Test Report: Hyperkit_macOS 19546

9c905d7ddc6fcb24a41b70e16c9a4a5dd3740602:2024-10-03:36493

Failed tests (28/220)

TestOffline (195.13s)
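To re-run only this test from a minikube source checkout, something like the following should work (a sketch; it assumes the integration tests live under test/integration and that out/minikube-darwin-amd64 has already been built, as the log below indicates):

    # Run the single failing integration test with verbose output.
    go test -v -timeout 30m -run TestOffline ./test/integration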

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-463000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-463000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.702336816s)
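The exact failing command can be replayed outside the test harness using only what the log records (same binary, flags, and profile name):

    out/minikube-darwin-amd64 start -p offline-docker-463000 \
        --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit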

-- stdout --
	* [offline-docker-463000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-463000" primary control-plane node in "offline-docker-463000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-463000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
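A start that dies like this can leave the offline-docker-463000 profile and a half-created hyperkit VM behind; deleting the profile before retrying clears that state (a standard minikube subcommand, not part of this log):

    out/minikube-darwin-amd64 delete -p offline-docker-463000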
** stderr ** 
	I1003 21:04:27.323607    6686 out.go:345] Setting OutFile to fd 1 ...
	I1003 21:04:27.323902    6686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:04:27.323907    6686 out.go:358] Setting ErrFile to fd 2...
	I1003 21:04:27.323910    6686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:04:27.324090    6686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 21:04:27.326000    6686 out.go:352] Setting JSON to false
	I1003 21:04:27.358463    6686 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5637,"bootTime":1728009030,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 21:04:27.358573    6686 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 21:04:27.415361    6686 out.go:177] * [offline-docker-463000] minikube v1.34.0 on Darwin 15.0.1
	I1003 21:04:27.479195    6686 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 21:04:27.479195    6686 notify.go:220] Checking for updates...
	I1003 21:04:27.521390    6686 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 21:04:27.542354    6686 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 21:04:27.563148    6686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 21:04:27.584313    6686 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:04:27.605382    6686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 21:04:27.626484    6686 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 21:04:27.658257    6686 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 21:04:27.700113    6686 start.go:297] selected driver: hyperkit
	I1003 21:04:27.700128    6686 start.go:901] validating driver "hyperkit" against <nil>
	I1003 21:04:27.700139    6686 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 21:04:27.705251    6686 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:04:27.705402    6686 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 21:04:27.716716    6686 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 21:04:27.723209    6686 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:04:27.723231    6686 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 21:04:27.723270    6686 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 21:04:27.723531    6686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 21:04:27.723566    6686 cni.go:84] Creating CNI manager for ""
	I1003 21:04:27.723603    6686 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 21:04:27.723610    6686 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 21:04:27.723683    6686 start.go:340] cluster config:
	{Name:offline-docker-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 21:04:27.723766    6686 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:04:27.745371    6686 out.go:177] * Starting "offline-docker-463000" primary control-plane node in "offline-docker-463000" cluster
	I1003 21:04:27.766295    6686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 21:04:27.766341    6686 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 21:04:27.766354    6686 cache.go:56] Caching tarball of preloaded images
	I1003 21:04:27.766566    6686 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 21:04:27.766584    6686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 21:04:27.767124    6686 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/offline-docker-463000/config.json ...
	I1003 21:04:27.767145    6686 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/offline-docker-463000/config.json: {Name:mk361fdb6f03bdaeb0b768106f5833372dc92550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 21:04:27.767636    6686 start.go:360] acquireMachinesLock for offline-docker-463000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:04:27.767714    6686 start.go:364] duration metric: took 62.007µs to acquireMachinesLock for "offline-docker-463000"
	I1003 21:04:27.767737    6686 start.go:93] Provisioning new machine with config: &{Name:offline-docker-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:04:27.767787    6686 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:04:27.789133    6686 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:04:27.789298    6686 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:04:27.789334    6686 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:04:27.800116    6686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53443
	I1003 21:04:27.800509    6686 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:04:27.801001    6686 main.go:141] libmachine: Using API Version  1
	I1003 21:04:27.801014    6686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:04:27.801234    6686 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:04:27.801335    6686 main.go:141] libmachine: (offline-docker-463000) Calling .GetMachineName
	I1003 21:04:27.801453    6686 main.go:141] libmachine: (offline-docker-463000) Calling .DriverName
	I1003 21:04:27.801586    6686 start.go:159] libmachine.API.Create for "offline-docker-463000" (driver="hyperkit")
	I1003 21:04:27.801615    6686 client.go:168] LocalClient.Create starting
	I1003 21:04:27.801645    6686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:04:27.801701    6686 main.go:141] libmachine: Decoding PEM data...
	I1003 21:04:27.801715    6686 main.go:141] libmachine: Parsing certificate...
	I1003 21:04:27.801798    6686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:04:27.801847    6686 main.go:141] libmachine: Decoding PEM data...
	I1003 21:04:27.801858    6686 main.go:141] libmachine: Parsing certificate...
	I1003 21:04:27.801876    6686 main.go:141] libmachine: Running pre-create checks...
	I1003 21:04:27.801882    6686 main.go:141] libmachine: (offline-docker-463000) Calling .PreCreateCheck
	I1003 21:04:27.801967    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:27.802139    6686 main.go:141] libmachine: (offline-docker-463000) Calling .GetConfigRaw
	I1003 21:04:27.810840    6686 main.go:141] libmachine: Creating machine...
	I1003 21:04:27.810858    6686 main.go:141] libmachine: (offline-docker-463000) Calling .Create
	I1003 21:04:27.811031    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:27.811322    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:04:27.811017    6706 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:04:27.811439    6686 main.go:141] libmachine: (offline-docker-463000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:04:28.285706    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:04:28.285624    6706 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/id_rsa...
	I1003 21:04:28.391394    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:04:28.391311    6706 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/offline-docker-463000.rawdisk...
	I1003 21:04:28.391413    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Writing magic tar header
	I1003 21:04:28.391436    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Writing SSH key tar header
	I1003 21:04:28.391857    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:04:28.391813    6706 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000 ...
	I1003 21:04:28.798657    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:28.798675    6686 main.go:141] libmachine: (offline-docker-463000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/hyperkit.pid
	I1003 21:04:28.798684    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Using UUID 3ce3c5ac-1b28-4143-94d8-33fe952f3ea0
	I1003 21:04:28.907177    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Generated MAC 2:2:7a:6e:17:f1
	I1003 21:04:28.907215    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-463000
	I1003 21:04:28.907277    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ce3c5ac-1b28-4143-94d8-33fe952f3ea0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:04:28.907323    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ce3c5ac-1b28-4143-94d8-33fe952f3ea0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:04:28.907385    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ce3c5ac-1b28-4143-94d8-33fe952f3ea0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/offline-docker-463000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-463000"}
	I1003 21:04:28.907480    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ce3c5ac-1b28-4143-94d8-33fe952f3ea0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/offline-docker-463000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-463000"
	I1003 21:04:28.907502    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:04:28.910808    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 DEBUG: hyperkit: Pid is 6726
	I1003 21:04:28.911408    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 0
	I1003 21:04:28.911421    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:28.911519    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:28.912676    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:28.912780    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:28.912794    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:28.912815    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:28.912830    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:28.912852    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:28.912867    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:28.912889    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:28.912916    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:28.912930    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:28.912945    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:28.912971    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:28.912989    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:28.913002    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:28.913019    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:28.913032    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:28.913049    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:28.913068    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:28.913084    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:28.921307    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:04:28.978112    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:04:28.979060    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:04:28.979083    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:04:28.979109    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:04:28.979125    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:04:29.355712    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:04:29.355738    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:04:29.470587    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:04:29.470609    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:04:29.470620    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:04:29.470633    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:04:29.471451    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:04:29.471461    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
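	Each "Attempt N" block that follows repeats the same scan: the driver polls /var/db/dhcpd_leases roughly every two seconds, looking for a lease matching the VM's freshly generated MAC (2:2:7a:6e:17:f1 in this run) among the 17 entries left over from earlier minikube VMs. A quick manual check on the affected host, using the MAC from this log (no output means the VM never obtained a DHCP lease from vmnet):

	    grep -i '7a:6e:17:f1' /var/db/dhcpd_leases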
	I1003 21:04:30.913271    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 1
	I1003 21:04:30.913295    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:30.913355    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:30.914252    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:30.914297    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:30.914318    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:30.914328    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:30.914335    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:30.914341    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:30.914347    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:30.914355    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:30.914362    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:30.914387    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:30.914400    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:30.914409    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:30.914416    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:30.914421    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:30.914434    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:30.914446    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:30.914453    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:30.914461    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:30.914471    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:32.915223    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 2
	I1003 21:04:32.915243    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:32.915326    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:32.916264    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:32.916309    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:32.916328    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:32.916342    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:32.916351    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:32.916358    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:32.916367    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:32.916391    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:32.916412    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:32.916421    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:32.916428    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:32.916437    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:32.916443    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:32.916451    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:32.916460    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:32.916468    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:32.916475    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:32.916482    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:32.916503    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:34.835810    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1003 21:04:34.835953    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1003 21:04:34.835962    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1003 21:04:34.856110    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:04:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1003 21:04:34.918461    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 3
	I1003 21:04:34.918481    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:34.918554    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:34.919479    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:34.919554    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:34.919561    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:34.919567    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:34.919572    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:34.919579    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:34.919585    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:34.919612    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:34.919625    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:34.919653    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:34.919666    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:34.919675    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:34.919682    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:34.919689    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:34.919701    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:34.919709    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:34.919716    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:34.919722    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:34.919729    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:36.920823    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 4
	I1003 21:04:36.920838    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:36.920928    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:36.921838    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:36.921936    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:36.921943    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:36.921952    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:36.921957    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:36.921963    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:36.921976    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:36.921983    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:36.921989    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:36.921994    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:36.922011    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:36.922017    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:36.922023    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:36.922029    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:36.922039    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:36.922045    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:36.922050    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:36.922057    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:36.922082    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:38.924092    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 5
	I1003 21:04:38.924115    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:38.924180    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:38.925082    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:38.925133    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:38.925163    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:38.925174    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:38.925183    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:38.925199    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:38.925207    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:38.925219    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:38.925226    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:38.925232    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:38.925237    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:38.925243    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:38.925250    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:38.925258    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:38.925266    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:38.925274    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:38.925280    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:38.925286    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:38.925291    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:40.927268    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 6
	I1003 21:04:40.927279    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:40.927342    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:40.928209    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:40.928283    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:40.928296    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:40.928317    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:40.928325    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:40.928335    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:40.928341    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:40.928347    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:40.928355    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:40.928361    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:40.928367    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:40.928380    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:40.928395    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:40.928402    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:40.928410    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:40.928416    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:40.928423    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:40.928438    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:40.928449    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:42.929567    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 7
	I1003 21:04:42.929583    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:42.929657    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:42.930534    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:42.930608    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:42.930617    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:42.930628    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:42.930636    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:42.930642    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:42.930669    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:42.930700    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:42.930712    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:42.930719    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:42.930727    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:42.930733    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:42.930750    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:42.930762    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:42.930773    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:42.930782    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:42.930788    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:42.930793    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:42.930806    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:44.931248    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 8
	I1003 21:04:44.931262    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:44.931360    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:44.932248    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:44.932350    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:44.932383    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:44.932391    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:44.932396    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:44.932402    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:44.932410    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:44.932416    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:44.932435    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:44.932443    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:44.932461    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:44.932474    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:44.932484    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:44.932492    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:44.932499    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:44.932507    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:44.932513    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:44.932520    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:44.932580    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:46.933444    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 9
	I1003 21:04:46.933465    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:46.933512    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:46.934406    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:46.934460    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:46.934475    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:46.934488    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:46.934496    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:46.934502    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:46.934508    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:46.934513    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:46.934519    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:46.934530    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:46.934538    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:46.934545    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:46.934551    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:46.934557    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:46.934562    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:46.934571    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:46.934578    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:46.934585    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:46.934593    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:48.936031    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 10
	I1003 21:04:48.936043    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:48.936105    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:48.936989    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:48.937049    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:48.937062    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:48.937074    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:48.937081    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:48.937087    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:48.937092    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:48.937098    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:48.937103    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:48.937121    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:48.937141    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:48.937153    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:48.937163    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:48.937171    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:48.937186    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:48.937196    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:48.937207    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:48.937214    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:48.937219    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:50.937587    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 11
	I1003 21:04:50.937610    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:50.937684    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:50.938576    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:50.938638    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:50.938648    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:50.938662    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:50.938677    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:50.938692    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:50.938706    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:50.938718    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:50.938727    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:50.938736    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:50.938744    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:50.938754    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:50.938763    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:50.938776    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:50.938786    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:50.938794    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:50.938801    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:50.938825    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:50.938835    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:52.940810    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 12
	I1003 21:04:52.940824    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:52.940957    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:52.941875    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:52.941969    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:52.942001    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:52.942009    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:52.942016    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:52.942025    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:52.942034    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:52.942060    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:52.942073    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:52.942081    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:52.942092    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:52.942100    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:52.942106    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:52.942112    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:52.942117    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:52.942124    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:52.942132    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:52.942146    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:52.942158    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:54.942919    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 13
	I1003 21:04:54.942939    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:54.943032    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:54.944002    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:54.944061    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:54.944071    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:54.944083    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:54.944091    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:54.944098    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:54.944104    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:54.944110    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:54.944117    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:54.944123    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:54.944129    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:54.944135    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:54.944149    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:54.944164    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:54.944173    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:54.944181    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:54.944191    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:54.944200    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:54.944209    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:56.946219    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 14
	I1003 21:04:56.946232    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:56.946278    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:56.947232    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:56.947303    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:56.947315    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:56.947323    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:56.947329    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:56.947335    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:56.947342    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:56.947357    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:56.947364    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:56.947382    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:56.947402    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:56.947412    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:56.947420    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:56.947425    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:56.947430    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:56.947437    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:56.947444    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:56.947454    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:56.947477    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:04:58.948516    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 15
	I1003 21:04:58.948528    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:04:58.948665    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:04:58.949795    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:04:58.949852    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:04:58.949863    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:04:58.949879    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:04:58.949887    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:04:58.949894    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:04:58.949899    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:04:58.949926    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:04:58.949937    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:04:58.949945    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:04:58.949965    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:04:58.949976    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:04:58.949984    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:04:58.950002    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:04:58.950016    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:04:58.950033    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:04:58.950042    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:04:58.950060    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:04:58.950072    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:00.950458    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 16
	I1003 21:05:00.950473    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:00.950620    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:00.951482    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:00.951524    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:00.951536    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:00.951545    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:00.951551    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:00.951557    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:00.951563    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:00.951578    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:00.951587    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:00.951594    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:00.951603    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:00.951609    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:00.951617    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:00.951623    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:00.951631    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:00.951638    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:00.951647    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:00.951658    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:00.951666    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:02.952719    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 17
	I1003 21:05:02.952735    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:02.952785    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:02.953666    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:02.953719    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:02.953731    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:02.953740    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:02.953746    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:02.953780    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:02.953796    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:02.953808    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:02.953815    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:02.953821    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:02.953835    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:02.953847    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:02.953859    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:02.953871    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:02.953888    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:02.953899    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:02.953907    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:02.953913    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:02.953918    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:04.955177    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 18
	I1003 21:05:04.955193    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:04.955266    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:04.956122    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:04.956170    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:04.956179    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:04.956186    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:04.956191    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:04.956197    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:04.956219    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:04.956231    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:04.956240    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:04.956253    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:04.956272    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:04.956280    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:04.956286    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:04.956294    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:04.956300    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:04.956306    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:04.956316    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:04.956329    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:04.956338    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:06.956345    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 19
	I1003 21:05:06.956357    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:06.956451    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:06.957344    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:06.957366    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:06.957393    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:06.957401    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:06.957410    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:06.957418    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:06.957433    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:06.957446    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:06.957460    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:06.957471    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:06.957479    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:06.957485    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:06.957491    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:06.957500    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:06.957508    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:06.957536    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:06.957549    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:06.957557    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:06.957566    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:08.959582    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 20
	I1003 21:05:08.959597    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:08.959662    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:08.960547    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:08.960617    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:08.960630    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:08.960639    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:08.960645    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:08.960659    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:08.960674    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:08.960686    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:08.960693    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:08.960698    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:08.960706    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:08.960717    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:08.960733    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:08.960747    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:08.960755    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:08.960762    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:08.960769    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:08.960778    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:08.960786    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:10.962753    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 21
	I1003 21:05:10.962767    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:10.962801    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:10.963698    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:10.963750    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:10.963763    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:10.963772    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:10.963779    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:10.963785    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:10.963794    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:10.963808    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:10.963826    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:10.963836    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:10.963846    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:10.963853    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:10.963860    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:10.963867    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:10.963874    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:10.963885    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:10.963892    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:10.963899    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:10.963907    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:12.965875    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 22
	I1003 21:05:12.965888    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:12.965976    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:12.966812    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:12.966865    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:12.966877    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:12.966886    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:12.966894    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:12.966905    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:12.966912    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:12.966919    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:12.966926    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:12.966932    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:12.966938    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:12.966946    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:12.966952    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:12.966962    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:12.966973    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:12.966981    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:12.966998    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:12.967011    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:12.967026    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:14.968200    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 23
	I1003 21:05:14.968215    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:14.968318    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:14.969214    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:14.969285    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:14.969295    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:14.969303    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:14.969310    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:14.969317    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:14.969326    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:14.969334    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:14.969358    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:14.969406    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:14.969442    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:14.969450    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:14.969460    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:14.969476    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:14.969488    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:14.969505    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:14.969513    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:14.969520    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:14.969527    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:16.969441    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 24
	I1003 21:05:16.969454    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:16.969550    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:16.970432    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:16.970468    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:16.970479    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:16.970493    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:16.970501    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:16.970510    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:16.970520    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:16.970528    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:16.970563    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:16.970578    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:16.970588    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:16.970604    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:16.970612    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:16.970618    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:16.970625    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:16.970636    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:16.970645    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:16.970652    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:16.970661    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:18.972682    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 25
	I1003 21:05:18.972719    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:18.972780    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:18.973668    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:18.973730    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:18.973740    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:18.973749    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:18.973758    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:18.973767    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:18.973777    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:18.973784    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:18.973801    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:18.973809    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:18.973816    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:18.973823    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:18.973829    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:18.973835    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:18.973841    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:18.973859    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:18.973881    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:18.973893    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:18.973902    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:20.975881    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 26
	I1003 21:05:20.975892    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:20.975998    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:20.976850    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:20.976906    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:20.976914    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:20.976923    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:20.976928    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:20.976944    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:20.976950    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:20.976957    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:20.976962    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:20.976968    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:20.976973    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:20.976978    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:20.976985    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:20.977002    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:20.977014    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:20.977025    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:20.977033    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:20.977042    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:20.977047    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:22.979034    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 27
	I1003 21:05:22.979049    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:22.979164    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:22.980015    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:22.980055    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:22.980065    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:22.980090    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:22.980104    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:22.980112    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:22.980119    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:22.980130    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:22.980136    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:22.980142    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:22.980149    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:22.980156    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:22.980161    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:22.980178    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:22.980194    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:22.980201    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:22.980209    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:22.980216    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:22.980221    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:24.980237    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 28
	I1003 21:05:24.980253    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:24.980331    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:24.981221    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:24.981265    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:24.981273    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:24.981280    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:24.981287    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:24.981293    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:24.981300    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:24.981305    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:24.981311    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:24.981323    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:24.981331    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:24.981337    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:24.981345    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:24.981351    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:24.981356    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:24.981363    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:24.981371    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:24.981382    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:24.981389    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:26.981458    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 29
	I1003 21:05:26.981470    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:26.981583    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:26.982456    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for 2:2:7a:6e:17:f1 in /var/db/dhcpd_leases ...
	I1003 21:05:26.982499    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:26.982513    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:26.982529    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:26.982537    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:26.982543    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:26.982554    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:26.982562    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:26.982569    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:26.982575    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:26.982583    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:26.982589    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:26.982596    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:26.982610    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:26.982622    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:26.982642    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:26.982655    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:26.982666    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:26.982678    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:28.982877    6686 client.go:171] duration metric: took 1m1.181420143s to LocalClient.Create
	I1003 21:05:30.984456    6686 start.go:128] duration metric: took 1m3.216816606s to createHost
	I1003 21:05:30.984471    6686 start.go:83] releasing machines lock for "offline-docker-463000", held for 1m3.216926286s
	W1003 21:05:30.984509    6686 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:2:7a:6e:17:f1
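
The failure above is the hyperkit driver giving up after ~30 polls of /var/db/dhcpd_leases (one "Attempt" roughly every two seconds) without ever seeing a lease for the VM's MAC, 2:2:7a:6e:17:f1. Note from the entries echoed in the log that macOS's bootpd writes MAC octets with leading zeros stripped (e.g. 92:69:5a:d:d4:66), so any lookup has to normalize both sides before comparing. Below is a minimal, illustrative Go sketch of such a lookup-and-poll loop; it is not the driver's actual code, and the lease-file field names (ip_address=, hw_address=) and brace-delimited entry layout are assumptions about bootpd's lease format.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// normalizeMAC strips leading zeros from each octet so that
// "02:02:7a:6e:17:f1" and "2:2:7a:6e:17:f1" compare equal, matching
// the zero-stripped form bootpd writes into its lease file.
func normalizeMAC(mac string) string {
	octets := strings.Split(strings.ToLower(mac), ":")
	for i, o := range octets {
		o = strings.TrimLeft(o, "0")
		if o == "" {
			o = "0"
		}
		octets[i] = o
	}
	return strings.Join(octets, ":")
}

// ipForMAC scans the lease file for an entry whose hw_address matches
// mac and returns its ip_address, or "" if no lease exists yet.
func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry begins
			ip, hw = "", ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,<mac>: drop the leading hardware-type field.
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
		case line == "}": // entry complete: compare normalized MACs
			if hw != "" && normalizeMAC(hw) == want {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}

// waitForIP retries the lookup on a fixed cadence, mirroring the
// two-second "Attempt N" loop in the log above.
func waitForIP(mac string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, _ := ipForMAC("/var/db/dhcpd_leases", mac); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := waitForIP("2:2:7a:6e:17:f1", 30, 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip:", ip)
}

In this run the loop exhausted its attempts, which surfaces as the "IP address never found in dhcp leases file" error above; that usually means the guest never booted far enough to request a DHCP lease, not that the lease parsing failed.
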
	I1003 21:05:30.984916    6686 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:05:30.984941    6686 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:05:30.996599    6686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53479
	I1003 21:05:30.996929    6686 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:05:30.997295    6686 main.go:141] libmachine: Using API Version  1
	I1003 21:05:30.997311    6686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:05:30.997530    6686 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:05:30.997903    6686 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:05:30.997925    6686 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:05:31.008933    6686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53481
	I1003 21:05:31.009250    6686 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:05:31.009589    6686 main.go:141] libmachine: Using API Version  1
	I1003 21:05:31.009599    6686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:05:31.009809    6686 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:05:31.009974    6686 main.go:141] libmachine: (offline-docker-463000) Calling .GetState
	I1003 21:05:31.010061    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.010134    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:31.011204    6686 main.go:141] libmachine: (offline-docker-463000) Calling .DriverName
	I1003 21:05:31.031803    6686 out.go:177] * Deleting "offline-docker-463000" in hyperkit ...
	I1003 21:05:31.074039    6686 main.go:141] libmachine: (offline-docker-463000) Calling .Remove
	I1003 21:05:31.074181    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.074190    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.074243    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:31.075318    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.075366    6686 main.go:141] libmachine: (offline-docker-463000) DBG | waiting for graceful shutdown
	I1003 21:05:32.077508    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:32.077645    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:32.078928    6686 main.go:141] libmachine: (offline-docker-463000) DBG | waiting for graceful shutdown
	I1003 21:05:33.079216    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:33.079281    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:33.080890    6686 main.go:141] libmachine: (offline-docker-463000) DBG | waiting for graceful shutdown
	I1003 21:05:34.081511    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:34.081570    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:34.082222    6686 main.go:141] libmachine: (offline-docker-463000) DBG | waiting for graceful shutdown
	I1003 21:05:35.083945    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:35.084066    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:35.085085    6686 main.go:141] libmachine: (offline-docker-463000) DBG | waiting for graceful shutdown
	I1003 21:05:36.085676    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:36.085868    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6726
	I1003 21:05:36.086487    6686 main.go:141] libmachine: (offline-docker-463000) DBG | sending sigkill
	I1003 21:05:36.086497    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1003 21:05:36.097835    6686 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:2:7a:6e:17:f1
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:2:7a:6e:17:f1
	I1003 21:05:36.097855    6686 start.go:729] Will try again in 5 seconds ...
	I1003 21:05:36.108785    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:05:36 WARN : hyperkit: failed to read stdout: EOF
	I1003 21:05:36.108803    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:05:36 WARN : hyperkit: failed to read stderr: EOF
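
The Remove call above shows the driver's teardown escalation: it polls for a graceful shutdown (the repeated "waiting for graceful shutdown" lines, roughly once per second), then falls back to SIGKILL, after which the hyperkit output readers report EOF. A hedged sketch of that pattern follows; the helper name, signal choice, and timings are illustrative, not the driver's actual implementation.

package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopVM asks the hypervisor process to exit cleanly and escalates to
// SIGKILL if it is still alive after a few one-second polls.
func stopVM(p *os.Process) error {
	if err := p.Signal(syscall.SIGTERM); err != nil {
		return err // process already gone, or not ours to signal
	}
	for i := 0; i < 5; i++ {
		time.Sleep(time.Second) // waiting for graceful shutdown
		// Signal 0 delivers nothing but reports whether the target
		// still exists; an error here means it exited on its own.
		if p.Signal(syscall.Signal(0)) != nil {
			return nil
		}
	}
	return p.Kill() // sending sigkill
}

func main() {
	// Demonstrate against a throwaway child process.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	go func() { _ = cmd.Wait() }() // reap the child once it exits
	if err := stopVM(cmd.Process); err != nil {
		log.Fatal(err)
	}
}
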
	I1003 21:05:41.098028    6686 start.go:360] acquireMachinesLock for offline-docker-463000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:06:33.798944    6686 start.go:364] duration metric: took 52.701033869s to acquireMachinesLock for "offline-docker-463000"
	I1003 21:06:33.798986    6686 start.go:93] Provisioning new machine with config: &{Name:offline-docker-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:06:33.799037    6686 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:06:33.820443    6686 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:06:33.820621    6686 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:06:33.820649    6686 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:06:33.831950    6686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53489
	I1003 21:06:33.832277    6686 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:06:33.832586    6686 main.go:141] libmachine: Using API Version  1
	I1003 21:06:33.832596    6686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:06:33.832838    6686 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:06:33.832965    6686 main.go:141] libmachine: (offline-docker-463000) Calling .GetMachineName
	I1003 21:06:33.833053    6686 main.go:141] libmachine: (offline-docker-463000) Calling .DriverName
	I1003 21:06:33.833219    6686 start.go:159] libmachine.API.Create for "offline-docker-463000" (driver="hyperkit")
	I1003 21:06:33.833241    6686 client.go:168] LocalClient.Create starting
	I1003 21:06:33.833268    6686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:06:33.833333    6686 main.go:141] libmachine: Decoding PEM data...
	I1003 21:06:33.833346    6686 main.go:141] libmachine: Parsing certificate...
	I1003 21:06:33.833390    6686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:06:33.833440    6686 main.go:141] libmachine: Decoding PEM data...
	I1003 21:06:33.833451    6686 main.go:141] libmachine: Parsing certificate...
	I1003 21:06:33.833465    6686 main.go:141] libmachine: Running pre-create checks...
	I1003 21:06:33.833470    6686 main.go:141] libmachine: (offline-docker-463000) Calling .PreCreateCheck
	I1003 21:06:33.833577    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:33.833645    6686 main.go:141] libmachine: (offline-docker-463000) Calling .GetConfigRaw
	I1003 21:06:33.869619    6686 main.go:141] libmachine: Creating machine...
	I1003 21:06:33.869629    6686 main.go:141] libmachine: (offline-docker-463000) Calling .Create
	I1003 21:06:33.869732    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:33.869928    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:06:33.869726    6884 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:06:33.869992    6686 main.go:141] libmachine: (offline-docker-463000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:06:34.074400    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:06:34.074289    6884 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/id_rsa...
	I1003 21:06:34.321274    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:06:34.321219    6884 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/offline-docker-463000.rawdisk...
	I1003 21:06:34.321285    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Writing magic tar header
	I1003 21:06:34.321294    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Writing SSH key tar header
	I1003 21:06:34.322008    6686 main.go:141] libmachine: (offline-docker-463000) DBG | I1003 21:06:34.321966    6884 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000 ...
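
The common.go lines above create the machine's on-disk assets: an SSH keypair and a .rawdisk backing file for the 20000MB disk. Raw disk images of this kind are typically created sparse, so the configured size is a logical upper bound rather than an immediate 20GB allocation; whether the hyperkit driver does exactly this is an assumption here. A minimal sketch of the idea (the helper name is hypothetical):

package main

import "os"

// createRawDisk allocates a sparse raw disk image: Truncate extends the
// file's logical size without writing data, so host blocks are only
// consumed as the guest actually writes to them.
func createRawDisk(path string, sizeMB int64) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	return f.Truncate(sizeMB << 20) // MB to bytes
}

func main() {
	if err := createRawDisk("offline-docker-463000.rawdisk", 20000); err != nil {
		panic(err)
	}
}
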
	I1003 21:06:34.686121    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:34.686138    6686 main.go:141] libmachine: (offline-docker-463000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/hyperkit.pid
	I1003 21:06:34.686162    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Using UUID 61c8c2bd-5cd3-4019-9753-8b3deec68147
	I1003 21:06:34.711298    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Generated MAC ae:b0:64:ad:14:84
	I1003 21:06:34.711315    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-463000
	I1003 21:06:34.711346    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"61c8c2bd-5cd3-4019-9753-8b3deec68147", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:06:34.711375    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"61c8c2bd-5cd3-4019-9753-8b3deec68147", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:06:34.711418    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "61c8c2bd-5cd3-4019-9753-8b3deec68147", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/offline-docker-463000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-463000"}
	I1003 21:06:34.711461    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 61c8c2bd-5cd3-4019-9753-8b3deec68147 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/offline-docker-463000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-463000"
	I1003 21:06:34.711479    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:06:34.714347    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 DEBUG: hyperkit: Pid is 6885
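
The Start/check/Arguments/CmdLine dump above, followed by "Redirecting stdout/stderr to logger" and "Pid is 6885", is the driver handing the assembled argv to a hyperkit child process and streaming its output into the minikube log; the earlier "failed to read stdout: EOF" warnings are these same readers hitting end-of-stream when the process dies. A rough Go sketch of the pattern, with the long state-dir paths shortened and helper names invented for illustration:

package main

import (
	"bufio"
	"io"
	"log"
	"os/exec"
	"sync"
)

// stream copies one of the child's output pipes into the logger,
// line by line, until the pipe closes (EOF on process exit).
func stream(name string, r io.Reader) {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		log.Printf("hyperkit: %s: %s", name, sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Printf("hyperkit: failed to read %s: %v", name, err)
	}
}

func main() {
	// Flags copied from the logged CmdLine; a real caller would pass
	// the full pidfile, disk, ISO, and console paths as well.
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", "hyperkit.pid",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "61c8c2bd-5cd3-4019-9753-8b3deec68147",
	)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Printf("Pid is %d", cmd.Process.Pid)

	// Drain both pipes before reaping the child, so no output is lost.
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); stream("stdout", stdout) }()
	go func() { defer wg.Done(); stream("stderr", stderr) }()
	wg.Wait()
	_ = cmd.Wait()
}
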
	I1003 21:06:34.714839    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 0
	I1003 21:06:34.714873    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:34.714946    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:34.716126    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:34.716228    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:34.716250    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:34.716272    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:34.716307    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:34.716327    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:34.716340    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:34.716350    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:34.716371    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:34.716380    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:34.716387    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:34.716397    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:34.716424    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:34.716433    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:34.716440    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:34.716450    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:34.716469    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:34.716486    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:34.716503    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:34.725045    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:06:34.734116    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/offline-docker-463000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:06:34.735071    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:06:34.735086    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:06:34.735095    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:06:34.735106    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:06:35.110274    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:06:35.110288    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:06:35.224893    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:06:35.224909    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:06:35.224920    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:06:35.224931    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:06:35.225794    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:06:35.225811    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:06:36.716697    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 1
	I1003 21:06:36.716712    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:36.716800    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:36.717707    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:36.717783    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:36.717793    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:36.717802    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:36.717808    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:36.717816    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:36.717824    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:36.717834    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:36.717840    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:36.717848    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:36.717854    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:36.717862    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:36.717870    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:36.717875    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:36.717881    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:36.717889    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:36.717895    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:36.717901    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:36.717910    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
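	Each "Attempt" above is one pass over /var/db/dhcpd_leases, looking for a lease whose hardware address matches the MAC generated for this VM (ae:b0:64:ad:14:84); every pass so far sees only the same 17 entries, apparently left over from earlier test clusters. To make the scan concrete, here is a minimal, self-contained Go sketch of such a lookup. It assumes the usual macOS lease-file layout of brace-delimited blocks of key=value lines (name=, ip_address=, hw_address=1,<mac>, ...) and is an illustration only, not the hyperkit driver's actual parser:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a macOS dhcpd_leases file for an entry whose
	// hw_address matches mac (case-insensitively) and returns its
	// ip_address. Entries are brace-delimited blocks such as:
	//
	//	{
	//		name=minikube
	//		ip_address=192.169.0.2
	//		hw_address=1,aa:4f:82:ed:f6:1a
	//		lease=0x66ff65a3
	//	}
	func findLeaseIP(path, mac string) (string, bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", false, err
		}
		defer f.Close()

		var ip string
		var matched bool
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // start of a lease block: reset per-entry state
				ip, matched = "", false
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// value looks like "1,ae:b0:64:ad:14:84"; drop the "1," type prefix
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
				matched = strings.EqualFold(hw, mac)
			case line == "}": // end of block: report a match
				if matched && ip != "" {
					return ip, true, nil
				}
			}
		}
		return "", false, sc.Err()
	}

	func main() {
		ip, ok, err := findLeaseIP("/var/db/dhcpd_leases", "ae:b0:64:ad:14:84")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("VM IP:", ip)
		} else {
			fmt.Println("MAC not in lease file yet")
		}
	}

	Note from the entries above that macOS writes single-digit octets without zero padding (e.g. fa:32:e8:cd:88:b), so a robust matcher would normalize both addresses before comparing; this sketch leaves that out for brevity.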
	I1003 21:06:38.718113    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 2
	I1003 21:06:38.718127    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:38.718168    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:38.719131    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:38.719172    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:38.719179    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:38.719190    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:38.719212    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:38.719235    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:38.719245    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:38.719253    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:38.719270    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:38.719289    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:38.719300    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:38.719308    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:38.719320    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:38.719328    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:38.719336    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:38.719342    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:38.719350    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:38.719359    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:38.719366    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:40.575158    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1003 21:06:40.575316    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1003 21:06:40.575327    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1003 21:06:40.594927    6686 main.go:141] libmachine: (offline-docker-463000) DBG | 2024/10/03 21:06:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1003 21:06:40.721145    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 3
	I1003 21:06:40.721169    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:40.721423    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:40.723039    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:40.723199    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:40.723212    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:40.723224    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:40.723231    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:40.723245    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:40.723257    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:40.723273    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:40.723283    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:40.723291    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:40.723298    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:40.723309    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:40.723320    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:40.723329    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:40.723339    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:40.723349    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:40.723359    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:40.723382    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:40.723397    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
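	The timestamps show a fixed two-second pause between scans, i.e. a plain polling loop with no notification mechanism: the driver simply rereads the lease file until the MAC appears or it gives up. A hedged sketch of that pattern, building on the findLeaseIP helper above (the attempt budget and helper names here are assumptions for illustration, not values taken from the driver):

	import (
		"fmt"
		"log"
		"time"
	)

	// pollForLeaseIP rereads /var/db/dhcpd_leases every interval until the
	// given MAC acquires a lease or the attempt budget runs out, mirroring
	// the numbered "Attempt N" lines in the log above.
	func pollForLeaseIP(mac string, attempts int, interval time.Duration) (string, error) {
		for i := 0; i < attempts; i++ {
			log.Printf("Attempt %d", i)
			ip, ok, err := findLeaseIP("/var/db/dhcpd_leases", mac)
			if err != nil {
				return "", err // unreadable lease file: fail fast
			}
			if ok {
				return ip, nil
			}
			time.Sleep(interval) // the log shows a ~2s cadence between scans
		}
		return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
	}

	In a run like this one the guest never obtains a lease, so every iteration sees the same 17 unrelated entries until the budget is exhausted and the start fails.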
	I1003 21:06:42.723399    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 4
	I1003 21:06:42.723414    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:42.723459    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:42.724382    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:42.724394    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:42.724428    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:42.724436    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:42.724441    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:42.724456    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:42.724467    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:42.724495    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:42.724508    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:42.724516    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:42.724527    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:42.724536    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:42.724548    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:42.724558    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:42.724565    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:42.724573    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:42.724581    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:42.724599    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:42.724616    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:44.725100    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 5
	I1003 21:06:44.725111    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:44.725225    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:44.726159    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:44.726202    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:44.726216    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:44.726246    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:44.726274    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:44.726296    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:44.726306    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:44.726313    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:44.726319    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:44.726365    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:44.726392    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:44.726401    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:44.726408    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:44.726425    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:44.726436    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:44.726443    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:44.726448    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:44.726453    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:44.726460    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:46.726432    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 6
	I1003 21:06:46.726446    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:46.726538    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:46.727489    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:46.727542    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:46.727552    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:46.727560    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:46.727568    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:46.727589    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:46.727599    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:46.727614    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:46.727623    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:46.727641    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:46.727657    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:46.727673    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:46.727683    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:46.727691    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:46.727705    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:46.727732    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:46.727764    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:46.727770    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:46.727792    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:48.728575    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 7
	I1003 21:06:48.728590    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:48.728684    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:48.729646    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:48.729699    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:48.729709    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:48.729717    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:48.729726    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:48.729732    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:48.729745    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:48.729753    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:48.729760    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:48.729768    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:48.729777    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:48.729785    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:48.729800    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:48.729818    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:48.729827    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:48.729835    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:48.729843    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:48.729850    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:48.729860    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:50.730359    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 8
	I1003 21:06:50.730373    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:50.730496    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:50.731415    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:50.731459    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:50.731473    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:50.731482    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:50.731488    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:50.731495    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:50.731511    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:50.731518    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:50.731525    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:50.731530    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:50.731543    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:50.731557    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:50.731572    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:50.731584    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:50.731596    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:50.731609    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:50.731617    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:50.731624    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:50.731632    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:52.733649    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 9
	I1003 21:06:52.733663    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:52.733739    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:52.734632    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:52.734663    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:52.734674    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:52.734692    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:52.734702    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:52.734712    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:52.734719    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:52.734735    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:52.734748    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:52.734756    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:52.734764    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:52.734770    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:52.734778    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:52.734797    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:52.734807    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:52.734822    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:52.734837    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:52.734845    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:52.734853    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:54.736290    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 10
	I1003 21:06:54.736312    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:54.736393    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:54.737316    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:54.737380    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:54.737388    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:54.737399    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:54.737407    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:54.737426    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:54.737435    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:54.737453    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:54.737465    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:54.737476    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:54.737484    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:54.737491    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:54.737504    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:54.737512    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:54.737519    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:54.737528    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:54.737535    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:54.737542    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:54.737547    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:56.738495    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 11
	I1003 21:06:56.738513    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:56.738599    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:56.739644    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:56.739683    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:56.739693    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:56.739704    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:56.739710    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:56.739717    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:56.739724    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:56.739732    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:56.739739    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:56.739748    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:56.739757    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:56.739763    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:56.739770    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:56.739799    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:56.739811    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:56.739819    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:56.739826    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:56.739832    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:56.739838    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:58.741065    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 12
	I1003 21:06:58.741089    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:58.741182    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:06:58.742070    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:06:58.742123    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:58.742182    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:58.742191    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:58.742196    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:58.742209    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:58.742220    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:58.742235    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:58.742242    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:58.742249    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:58.742256    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:58.742262    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:58.742270    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:58.742276    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:58.742283    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:58.742292    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:58.742300    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:58.742316    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:58.742329    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:00.743731    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 13
	I1003 21:07:00.743746    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:00.743849    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:07:00.744793    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:07:00.744838    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:00.744847    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:00.744856    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:00.744863    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:00.744878    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:00.744890    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:00.744907    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:00.744918    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:00.744927    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:00.744935    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:00.744951    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:00.744962    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:00.744969    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:00.744974    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:00.744988    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:00.745000    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:00.745008    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:00.745014    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
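	Each attempt above is the driver re-reading /var/db/dhcpd_leases and comparing every lease's hardware address against the VM's MAC (ae:b0:64:ad:14:84). A minimal Go sketch of that parse-and-match step, assuming the brace-delimited key=value stanzas macOS bootpd writes; dhcpEntry mirrors the fields printed in the log, but parseLeases and its field handling are illustrative, not minikube's actual implementation:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// dhcpEntry mirrors the fields the driver logs for each lease.
	type dhcpEntry struct {
		Name, IPAddress, HWAddress, ID, Lease string
	}

	// parseLeases reads /var/db/dhcpd_leases-style stanzas: one key=value
	// pair per line between "{" and "}" (assumed bootpd format).
	func parseLeases(path string) ([]dhcpEntry, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		var entries []dhcpEntry
		var cur dhcpEntry
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = dhcpEntry{} // start of a new lease stanza
			case line == "}":
				entries = append(entries, cur)
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// stored as "1,c2:25:8b:a2:94:69"; drop the "1," type prefix
				cur.HWAddress = strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(cur.HWAddress, ","); i >= 0 {
					cur.HWAddress = cur.HWAddress[i+1:]
				}
			case strings.HasPrefix(line, "identifier="):
				cur.ID = strings.TrimPrefix(line, "identifier=")
			case strings.HasPrefix(line, "lease="):
				cur.Lease = strings.TrimPrefix(line, "lease=")
			}
		}
		return entries, sc.Err()
	}

	func main() {
		target := "ae:b0:64:ad:14:84" // the MAC the driver is waiting on
		entries, err := parseLeases("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, e := range entries {
			if e.HWAddress == target {
				fmt.Println("found IP:", e.IPAddress)
				return
			}
		}
		fmt.Println("no lease yet for", target)
	}

	Note the log prints octets without zero-padding (e.g. fa:32:e8:cd:88:b), so a robust matcher would normalize both MACs before comparing; the exact string comparison above is a simplification.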
	[Attempts 14 through 27 (21:07:02 to 21:07:28) repeat the block above verbatim: every two seconds the driver re-reads /var/db/dhcpd_leases, finds the same 17 entries (192.169.0.2 through 192.169.0.18), and never matches ae:b0:64:ad:14:84.]
	I1003 21:07:30.780826    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 28
	I1003 21:07:30.780838    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:30.780925    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:07:30.781801    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:07:30.781872    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:30.781884    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:30.781891    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:30.781897    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:30.781907    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:30.781917    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:30.781935    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:30.781944    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:30.781951    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:30.781958    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:30.781973    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:30.781985    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:30.781996    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:30.782003    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:30.782013    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:30.782021    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:30.782038    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:30.782048    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:32.782831    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Attempt 29
	I1003 21:07:32.782848    6686 main.go:141] libmachine: (offline-docker-463000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:32.782973    6686 main.go:141] libmachine: (offline-docker-463000) DBG | hyperkit pid from json: 6885
	I1003 21:07:32.783915    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Searching for ae:b0:64:ad:14:84 in /var/db/dhcpd_leases ...
	I1003 21:07:32.783959    6686 main.go:141] libmachine: (offline-docker-463000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:32.783974    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:32.783982    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:32.783988    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:32.783995    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:32.784001    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:32.784008    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:32.784014    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:32.784020    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:32.784025    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:32.784047    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:32.784057    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:32.784063    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:32.784070    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:32.784076    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:32.784083    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:32.784092    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:32.784101    6686 main.go:141] libmachine: (offline-docker-463000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:34.786325    6686 client.go:171] duration metric: took 1m0.953244538s to LocalClient.Create
	I1003 21:07:36.786530    6686 start.go:128] duration metric: took 1m2.98765591s to createHost
	I1003 21:07:36.786556    6686 start.go:83] releasing machines lock for "offline-docker-463000", held for 1m2.987742378s
	W1003 21:07:36.786657    6686 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-463000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:b0:64:ad:14:84
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-463000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:b0:64:ad:14:84
	I1003 21:07:36.828861    6686 out.go:201] 
	W1003 21:07:36.849792    6686 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:b0:64:ad:14:84
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:b0:64:ad:14:84
	W1003 21:07:36.849804    6686 out.go:270] * 
	* 
	W1003 21:07:36.850461    6686 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 21:07:36.912997    6686 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-463000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-03 21:07:37.024647 -0700 PDT m=+4798.158505775
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-463000 -n offline-docker-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-463000 -n offline-docker-463000: exit status 7 (92.572634ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1003 21:07:37.115235    6898 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:07:37.115256    6898 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-463000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-463000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-463000: (5.266056098s)
--- FAIL: TestOffline (195.13s)
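For context, the wall of "Attempt N" lines above is the hyperkit driver polling the host's DHCP lease database for the new VM's MAC address (ae:b0:64:ad:14:84); the VM never acquired a lease, so the loop ran out of attempts and start failed with GUEST_PROVISION. A minimal Go sketch of that lookup, assuming the standard macOS /var/db/dhcpd_leases format reflected in the entries echoed above (findIPForMAC and the 2-second/30-attempt cadence are illustrative, not the driver's actual code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the lease file for an entry whose hw_address matches
// mac and returns the ip_address recorded just above it. Entries look like:
//   { name=minikube ip_address=192.169.0.18 hw_address=1,c2:25:8b:a2:94:69 ... }
func findIPForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address values carry a "1," hardware-type prefix before the MAC.
		if strings.HasPrefix(line, "hw_address=1,"+mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	for attempt := 1; attempt <= 30; attempt++ {
		if ip, err := findIPForMAC("/var/db/dhcpd_leases", "ae:b0:64:ad:14:84"); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
	}
	fmt.Println("IP address never found in dhcp leases file")
}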

TestAddons/serial/GCPAuth/PullSecret (480.49s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:615: (dbg) Run:  kubectl --context addons-675000 create -f testdata/busybox.yaml
addons_test.go:622: (dbg) Run:  kubectl --context addons-675000 create sa gcp-auth-test
addons_test.go:628: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a93eaab-8280-4a4e-9656-a605b002e31b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:628: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:628: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-675000 -n addons-675000
addons_test.go:628: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-03 20:01:42.556394 -0700 PDT m=+843.762593351
addons_test.go:628: (dbg) Run:  kubectl --context addons-675000 describe po busybox -n default
addons_test.go:628: (dbg) kubectl --context addons-675000 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-675000/192.169.0.2
Start Time:       Thu, 03 Oct 2024 19:53:42 -0700
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
  IP:  10.244.0.28
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8c8zl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8c8zl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-675000
  Normal   Pulling    6m27s (x4 over 7m59s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     6m26s (x4 over 7m58s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     6m26s (x4 over 7m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     6m14s (x6 over 7m58s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m51s (x21 over 7m58s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:628: (dbg) Run:  kubectl --context addons-675000 logs busybox -n default
addons_test.go:628: (dbg) Non-zero exit: kubectl --context addons-675000 logs busybox -n default: exit status 1 (66.000619ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:628: kubectl --context addons-675000 logs busybox -n default: exit status 1
addons_test.go:630: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.49s)
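The pod never left ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed", so the 8m0s readiness wait simply timed out. The wait itself boils down to polling the pod's Ready condition through kubectl; a rough Go sketch under those assumptions (podReady and the 5-second interval are illustrative, not the helpers' actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl, against the test cluster's context, whether the
// first pod matching the label selector reports a Ready=True condition.
func podReady(kubecontext, namespace, selector string) bool {
	out, err := exec.Command("kubectl", "--context", kubecontext,
		"get", "po", "-n", namespace, "-l", selector,
		"-o", `jsonpath={.items[0].status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the 8m0s budget from the log
	for time.Now().Before(deadline) {
		if podReady("addons-675000", "default", "integration-test=busybox") {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("pod failed to start within 8m0s: context deadline exceeded")
}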

TestCertOptions (251.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-704000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E1003 21:13:02.028277    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:14:10.983746    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:14:25.114692    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:14:38.694336    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:15:10.911332    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-704000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.020928205s)

-- stdout --
	* [cert-options-704000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-704000" primary control-plane node in "cert-options-704000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-704000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:2e:a9:2e:55:f1
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-704000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:e3:bd:a1:d9:cc
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:e3:bd:a1:d9:cc
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-704000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-704000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-704000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (173.00036ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-704000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-704000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-704000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-704000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-704000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (178.618493ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-704000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-704000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-704000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-03 21:17:03.895049 -0700 PDT m=+5365.007575932
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-704000 -n cert-options-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-704000 -n cert-options-704000: exit status 7 (89.156143ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1003 21:17:03.982605    7261 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:17:03.982627    7261 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-704000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-704000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-704000: (5.24769548s)
--- FAIL: TestCertOptions (251.76s)
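Because the VM never got an IP, the cluster never came up and none of the requested SANs (127.0.0.1, 192.168.15.15, localhost, www.google.com) could be verified against the apiserver certificate. For reference, that SAN check amounts to parsing the cert and comparing its DNSNames and IPAddresses; a minimal Go sketch (the local apiserver.crt path and the hard-coded expectations mirror the test flags; this is not the test's actual code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func contains(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

func main() {
	// In the real test the cert is read over SSH from
	// /var/lib/minikube/certs/apiserver.crt; the local path is illustrative.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Collect the certificate's IP SANs as strings for easy comparison.
	var ips []string
	for _, ip := range cert.IPAddresses {
		ips = append(ips, ip.String())
	}
	for _, want := range []string{"localhost", "www.google.com"} {
		if !contains(cert.DNSNames, want) {
			fmt.Printf("apiserver cert does not include %s in SAN.\n", want)
		}
	}
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		if !contains(ips, want) {
			fmt.Printf("apiserver cert does not include %s in SAN.\n", want)
		}
	}
}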

TestCertExpiration (1776.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-553000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E1003 21:11:54.852877    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-553000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.322395559s)

-- stdout --
	* [cert-expiration-553000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-553000" primary control-plane node in "cert-expiration-553000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-553000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:5:2a:4a:ec:be
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-553000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:b9:c2:68:b5:e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:b9:c2:68:b5:e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-553000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-553000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-553000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (22m24.861665308s)

-- stdout --
	* [cert-expiration-553000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-553000" primary control-plane node in "cert-expiration-553000" cluster
	* Updating the running hyperkit "cert-expiration-553000" VM ...
	* Updating the running hyperkit "cert-expiration-553000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-553000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-553000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-553000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-553000" primary control-plane node in "cert-expiration-553000" cluster
	* Updating the running hyperkit "cert-expiration-553000" VM ...
	* Updating the running hyperkit "cert-expiration-553000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-553000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-03 21:41:25.520987 -0700 PDT m=+6826.581792570
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-553000 -n cert-expiration-553000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-553000 -n cert-expiration-553000: exit status 7 (99.816451ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1003 21:41:25.618788    8856 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:41:25.618811    8856 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-553000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-553000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-553000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-553000: (5.250655761s)
--- FAIL: TestCertExpiration (1776.54s)
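Here the first start mints certificates valid for only 3m (--cert-expiration=3m); the second start was expected to notice they had lapsed and warn before regenerating them, but it never got far enough because the VM's IP was lost. The expiry condition itself is just a NotAfter comparison; a small Go sketch of that check (the client.crt path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("client.crt") // e.g. a profile's client certificate
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// A certificate minted with --cert-expiration=3m should trip this branch
	// a few minutes after the first start.
	if remaining := time.Until(cert.NotAfter); remaining <= 0 {
		fmt.Printf("certificate expired %s ago (NotAfter=%s)\n", -remaining, cert.NotAfter)
	} else {
		fmt.Printf("certificate still valid for %s\n", remaining)
	}
}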

TestDockerFlags (251.88s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-297000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E1003 21:09:10.985045    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:10.992089    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:11.003929    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:11.027245    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:11.069418    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:11.151895    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:11.314425    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:11.636138    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:12.278113    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:13.560872    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:16.122369    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:21.244101    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:31.486849    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:09:51.968363    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:10:10.912318    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:10:32.930307    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-297000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.101167714s)

-- stdout --
	* [docker-flags-297000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-297000" primary control-plane node in "docker-flags-297000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-297000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1003 21:08:45.633564    6952 out.go:345] Setting OutFile to fd 1 ...
	I1003 21:08:45.633864    6952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:08:45.633869    6952 out.go:358] Setting ErrFile to fd 2...
	I1003 21:08:45.633873    6952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:08:45.634038    6952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 21:08:45.635656    6952 out.go:352] Setting JSON to false
	I1003 21:08:45.663857    6952 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5895,"bootTime":1728009030,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 21:08:45.663942    6952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 21:08:45.687588    6952 out.go:177] * [docker-flags-297000] minikube v1.34.0 on Darwin 15.0.1
	I1003 21:08:45.729551    6952 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 21:08:45.729602    6952 notify.go:220] Checking for updates...
	I1003 21:08:45.771427    6952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 21:08:45.792555    6952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 21:08:45.813504    6952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 21:08:45.834436    6952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:08:45.855629    6952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 21:08:45.877015    6952 config.go:182] Loaded profile config "force-systemd-flag-603000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 21:08:45.877107    6952 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 21:08:45.908481    6952 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 21:08:45.950509    6952 start.go:297] selected driver: hyperkit
	I1003 21:08:45.950524    6952 start.go:901] validating driver "hyperkit" against <nil>
	I1003 21:08:45.950534    6952 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 21:08:45.955968    6952 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:08:45.956111    6952 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 21:08:45.967039    6952 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 21:08:45.973461    6952 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:08:45.973493    6952 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 21:08:45.973529    6952 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 21:08:45.973769    6952 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1003 21:08:45.973804    6952 cni.go:84] Creating CNI manager for ""
	I1003 21:08:45.973841    6952 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 21:08:45.973847    6952 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 21:08:45.973916    6952 start.go:340] cluster config:
	{Name:docker-flags-297000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 21:08:45.974002    6952 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:08:46.016496    6952 out.go:177] * Starting "docker-flags-297000" primary control-plane node in "docker-flags-297000" cluster
	I1003 21:08:46.037485    6952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 21:08:46.037525    6952 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 21:08:46.037538    6952 cache.go:56] Caching tarball of preloaded images
	I1003 21:08:46.037662    6952 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 21:08:46.037670    6952 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 21:08:46.037745    6952 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/docker-flags-297000/config.json ...
	I1003 21:08:46.037763    6952 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/docker-flags-297000/config.json: {Name:mk9b3bccf4d6a6c53ff65fea0698ba55c1b0b390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 21:08:46.038102    6952 start.go:360] acquireMachinesLock for docker-flags-297000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:09:42.780117    6952 start.go:364] duration metric: took 56.742154635s to acquireMachinesLock for "docker-flags-297000"
	I1003 21:09:42.780155    6952 start.go:93] Provisioning new machine with config: &{Name:docker-flags-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:09:42.780209    6952 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:09:42.801560    6952 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:09:42.801704    6952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:09:42.801764    6952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:09:42.812777    6952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53523
	I1003 21:09:42.813105    6952 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:09:42.813513    6952 main.go:141] libmachine: Using API Version  1
	I1003 21:09:42.813522    6952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:09:42.813757    6952 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:09:42.813886    6952 main.go:141] libmachine: (docker-flags-297000) Calling .GetMachineName
	I1003 21:09:42.814003    6952 main.go:141] libmachine: (docker-flags-297000) Calling .DriverName
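
The sequence above is the docker-machine plugin handshake: the hyperkit driver runs as a separate binary, serves RPC on an ephemeral loopback port (127.0.0.1:53523 here), and the parent drives it through calls such as GetVersion, GetMachineName and DriverName. A minimal sketch of that shape using Go's net/rpc; the method set and argument types are illustrative, not the libmachine wire protocol:

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver is a stand-in for a machine driver plugin.
type Driver struct{}

// GetVersion mirrors the version handshake logged above.
func (d *Driver) GetVersion(_ int, reply *int) error {
	*reply = 1
	return nil
}

// DriverName identifies the backend, e.g. "hyperkit".
func (d *Driver) DriverName(_ int, reply *string) error {
	*reply = "hyperkit"
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		log.Fatal(err)
	}
	// Listen on an ephemeral loopback port, as the plugin does.
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Plugin server listening at address", l.Addr())
	go srv.Accept(l)

	// The parent process side: dial the port and call methods.
	client, err := rpc.Dial("tcp", l.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var name string
	if err := client.Call("Driver.DriverName", 0, &name); err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver:", name)
}

Running the driver out of process is what lets one minikube binary support many hypervisors without linking them all in.
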
	I1003 21:09:42.814110    6952 start.go:159] libmachine.API.Create for "docker-flags-297000" (driver="hyperkit")
	I1003 21:09:42.814135    6952 client.go:168] LocalClient.Create starting
	I1003 21:09:42.814166    6952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:09:42.814224    6952 main.go:141] libmachine: Decoding PEM data...
	I1003 21:09:42.814239    6952 main.go:141] libmachine: Parsing certificate...
	I1003 21:09:42.814298    6952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:09:42.814347    6952 main.go:141] libmachine: Decoding PEM data...
	I1003 21:09:42.814357    6952 main.go:141] libmachine: Parsing certificate...
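
The Reading/Decoding/Parsing triplet above validates the CA and client certificates under .minikube/certs before provisioning begins. The equivalent stdlib steps look roughly like this (same ca.pem path as the log; any PEM-encoded certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Reading certificate data from the path logged above.
	data, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	// Decoding PEM data...
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	// Parsing certificate...
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}
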
	I1003 21:09:42.814373    6952 main.go:141] libmachine: Running pre-create checks...
	I1003 21:09:42.814380    6952 main.go:141] libmachine: (docker-flags-297000) Calling .PreCreateCheck
	I1003 21:09:42.814472    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:42.814624    6952 main.go:141] libmachine: (docker-flags-297000) Calling .GetConfigRaw
	I1003 21:09:42.843452    6952 main.go:141] libmachine: Creating machine...
	I1003 21:09:42.843461    6952 main.go:141] libmachine: (docker-flags-297000) Calling .Create
	I1003 21:09:42.843567    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:42.843745    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:09:42.843563    6975 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:09:42.843812    6952 main.go:141] libmachine: (docker-flags-297000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:09:43.066926    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:09:43.066851    6975 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/id_rsa...
	I1003 21:09:43.124075    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:09:43.124005    6975 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/docker-flags-297000.rawdisk...
	I1003 21:09:43.124086    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Writing magic tar header
	I1003 21:09:43.124097    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Writing SSH key tar header
	I1003 21:09:43.124718    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:09:43.124681    6975 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000 ...
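
"Writing magic tar header" and "Writing SSH key tar header" refer to the boot2docker raw-disk layout: the .rawdisk file starts with a small tar archive carrying the freshly generated SSH key, which the guest unpacks on first boot, and the rest of the file is the VM's disk. A sketch of producing such a file; the in-guest path and exact layout are simplifying assumptions, not the driver's precise format:

package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	// Create the raw disk file and write a tar archive at its head.
	f, err := os.Create("/tmp/example.rawdisk")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	key := []byte("ssh-rsa AAAA... example key\n") // placeholder key material
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{
		Name: ".ssh/authorized_keys", // guest-side path is an assumption
		Mode: 0o600,
		Size: int64(len(key)),
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}

	// Extend the file to the full disk size, 20000 MB in this run;
	// on APFS/HFS+ the tail stays sparse until the guest writes to it.
	if err := f.Truncate(20000 * 1024 * 1024); err != nil {
		log.Fatal(err)
	}
}
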
	I1003 21:09:43.491009    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:43.491030    6952 main.go:141] libmachine: (docker-flags-297000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/hyperkit.pid
	I1003 21:09:43.491069    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Using UUID fbc13f3e-be1b-4d79-a975-0352f7ea576d
	I1003 21:09:43.516169    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Generated MAC 4a:48:f3:ea:b6:e7
	I1003 21:09:43.516187    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-297000
	I1003 21:09:43.516214    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbc13f3e-be1b-4d79-a975-0352f7ea576d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:09:43.516243    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbc13f3e-be1b-4d79-a975-0352f7ea576d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:09:43.516275    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fbc13f3e-be1b-4d79-a975-0352f7ea576d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/docker-flags-297000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-297000"}
	I1003 21:09:43.516307    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fbc13f3e-be1b-4d79-a975-0352f7ea576d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/docker-flags-297000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-297000"
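
The CmdLine above is what the driver forks: hyperkit with a pid file (-F), CPU and memory limits (-c/-m), PCI slots (-s) for the host bridge, virtio NIC, raw virtio-blk disk, boot ISO and virtio RNG, a serial console (-l), and direct kernel boot via kexec (-f). A reduced sketch of launching it and piping its output into a logger, matching the "Redirecting stdout/stderr to logger" and "Pid is 6976" lines that follow (paths and the flag subset here are placeholders):

package main

import (
	"bufio"
	"io"
	"log"
	"os/exec"
)

// logLines copies each line of the child's output into the logger,
// as the "Redirecting stdout/stderr to logger" step does.
func logLines(prefix string, r io.Reader) {
	s := bufio.NewScanner(r)
	for s.Scan() {
		log.Printf("hyperkit: %s: %s", prefix, s.Text())
	}
}

func main() {
	// Flag subset trimmed from the CmdLine above; paths are placeholders,
	// and this only runs on a Mac with hyperkit installed.
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", "/tmp/hyperkit.pid",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-s", "2:0,virtio-blk,/tmp/example.rawdisk",
		"-s", "4,virtio-rnd",
		"-f", "kexec,/tmp/bzimage,/tmp/initrd,loglevel=3 console=ttyS0",
	)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Printf("Pid is %d", cmd.Process.Pid)
	go logLines("stdout", stdout)
	go logLines("stderr", stderr)
	if err := cmd.Wait(); err != nil {
		log.Print(err)
	}
}

The INFO/stderr lines below (fcntl, vmx_set_ctlreg, rdmsr) are exactly this redirection at work: hyperkit's own diagnostics surfaced through the driver's logger.
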
	I1003 21:09:43.516329    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:09:43.519255    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 DEBUG: hyperkit: Pid is 6976
	I1003 21:09:43.519793    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 0
	I1003 21:09:43.519809    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:43.519930    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:43.520975    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:43.521009    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:43.521028    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:43.521044    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:43.521054    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:43.521063    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:43.521072    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:43.521083    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:43.521092    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:43.521099    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:43.521108    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:43.521133    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:43.521149    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:43.521158    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:43.521167    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:43.521178    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:43.521190    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:43.521202    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:43.521218    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
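
Every "Attempt N" below re-reads /var/db/dhcpd_leases looking for the MAC generated above (4a:48:f3:ea:b6:e7); since the VM never obtains a lease in this run, all 17 entries belong to other machines and the loop keeps polling every ~2 s. Note the lease file stores zero-stripped octets (e.g. 96:d4:36:7:25:b2), so a robust match normalizes both sides. A sketch of that scan; the key=value block layout is an assumption reconstructed from the parsed entries in the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC lowercases a MAC and strips leading zeros from each
// octet, since dhcpd_leases stores e.g. "96:d4:36:7:25:b2".
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findIPForMAC scans the lease file for a matching hw_address and
// returns that entry's ip_address. It assumes ip_address precedes
// hw_address within each entry, as reconstructed from the log.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip string
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,<mac>; drop the leading hardware-type byte.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if normalizeMAC(hw) == want {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPForMAC("/var/db/dhcpd_leases", "4a:48:f3:ea:b6:e7"); ok {
		fmt.Println("found", ip)
	} else {
		fmt.Println("not found; sleep and retry, as in Attempt 0..11 below")
	}
}
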
	I1003 21:09:43.529407    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:09:43.537847    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:09:43.539000    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:09:43.539040    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:09:43.539066    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:09:43.539079    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:09:43.927031    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:09:43.927047    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:09:44.042280    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:09:44.042297    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:09:44.042308    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:09:44.042315    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:09:44.043191    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:09:44.043209    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:09:45.522852    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 1
	I1003 21:09:45.522869    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:45.523009    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:45.523887    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:45.523942    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:45.523957    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:45.523976    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:45.523985    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:45.523992    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:45.523997    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:45.524017    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:45.524027    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:45.524035    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:45.524043    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:45.524049    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:45.524056    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:45.524089    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:45.524098    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:45.524105    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:45.524118    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:45.524126    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:45.524134    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:47.526117    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 2
	I1003 21:09:47.526131    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:47.526165    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:47.527067    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:47.527125    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:47.527137    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:47.527153    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:47.527164    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:47.527171    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:47.527177    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:47.527192    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:47.527199    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:47.527204    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:47.527209    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:47.527215    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:47.527221    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:47.527236    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:47.527249    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:47.527259    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:47.527267    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:47.527274    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:47.527279    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:49.410626    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:49 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 21:09:49.410768    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:49 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 21:09:49.410776    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:49 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 21:09:49.431061    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:09:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 21:09:49.529397    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 3
	I1003 21:09:49.529419    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:49.529664    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:49.531255    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:49.531492    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:49.531522    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:49.531545    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:49.531563    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:49.531573    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:49.531585    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:49.531598    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:49.531611    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:49.531620    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:49.531627    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:49.531659    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:49.531678    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:49.531692    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:49.531718    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:49.531728    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:49.531739    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:49.531753    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:49.531772    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:51.532349    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 4
	I1003 21:09:51.532365    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:51.532456    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:51.533325    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:51.533393    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:51.533403    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:51.533425    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:51.533437    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:51.533446    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:51.533452    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:51.533470    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:51.533484    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:51.533492    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:51.533499    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:51.533511    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:51.533519    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:51.533533    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:51.533544    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:51.533564    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:51.533573    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:51.533598    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:51.533610    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:53.535604    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 5
	I1003 21:09:53.535622    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:53.535679    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:53.536695    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:53.536708    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:53.536718    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:53.536724    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:53.536732    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:53.536740    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:53.536747    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:53.536753    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:53.536763    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:53.536778    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:53.536786    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:53.536793    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:53.536802    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:53.536808    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:53.536815    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:53.536831    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:53.536843    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:53.536851    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:53.536870    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:55.538881    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 6
	I1003 21:09:55.538895    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:55.538932    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:55.539822    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:55.539883    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:55.539893    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:55.539901    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:55.539921    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:55.539930    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:55.539937    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:55.539945    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:55.539959    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:55.539970    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:55.539977    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:55.539983    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:55.539989    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:55.539998    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:55.540005    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:55.540010    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:55.540016    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:55.540023    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:55.540031    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:57.540253    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 7
	I1003 21:09:57.540264    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:57.540323    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:57.541203    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:57.541256    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:57.541268    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:57.541277    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:57.541283    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:57.541298    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:57.541306    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:57.541312    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:57.541318    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:57.541324    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:57.541333    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:57.541349    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:57.541361    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:57.541368    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:57.541375    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:57.541384    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:57.541391    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:57.541402    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:57.541410    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:59.541496    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 8
	I1003 21:09:59.541510    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:59.541624    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:09:59.542501    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:09:59.542563    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:59.542575    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:59.542583    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:59.542594    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:59.542600    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:59.542605    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:59.542623    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:59.542633    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:59.542651    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:59.542659    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:59.542667    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:59.542681    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:59.542694    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:59.542703    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:59.542711    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:59.542718    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:59.542725    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:59.542732    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:01.542722    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 9
	I1003 21:10:01.542737    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:01.542837    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:01.543713    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:01.543752    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:01.543761    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:01.543770    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:01.543775    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:01.543794    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:01.543807    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:01.543815    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:01.543824    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:01.543833    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:01.543842    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:01.543848    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:01.543855    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:01.543866    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:01.543882    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:01.543910    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:01.543923    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:01.543940    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:01.543949    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:03.545950    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 10
	I1003 21:10:03.545964    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:03.546061    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:03.547061    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:03.547082    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:03.547097    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:03.547108    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:03.547117    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:03.547123    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:03.547128    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:03.547134    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:03.547141    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:03.547148    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:03.547164    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:03.547172    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:03.547179    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:03.547191    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:03.547212    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:03.547222    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:03.547236    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:03.547255    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:03.547272    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
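
Each attempt above re-reads /var/db/dhcpd_leases, the lease database macOS bootpd maintains for the hyperkit VM's DHCP range, and compares every entry's HWAddress against the MAC generated for docker-flags-297000 (4a:48:f3:ea:b6:e7), which never appears among the 17 stale minikube entries. The following is a minimal sketch of that parse step, assuming the standard bootpd block format (name=/ip_address=/hw_address=/lease= lines between braces); DHCPLease, parseLeases, and main are illustrative names, not the driver's actual code. Note that bootpd writes octets unpadded (e.g. 96:d4:36:7:25:b2 above), so any comparison has to use the file's own formatting.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// DHCPLease mirrors the fields printed in the "dhcp entry:" log lines above.
type DHCPLease struct {
	Name      string
	IPAddress string
	HWAddress string
	ID        string
	Lease     string
}

// parseLeases reads bootpd lease blocks of the (assumed) form:
//
//	{
//		name=minikube
//		ip_address=192.169.0.18
//		hw_address=1,c2:25:8b:a2:94:69
//		lease=0x66ff76ee
//	}
func parseLeases(path string) ([]DHCPLease, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var leases []DHCPLease
	var cur DHCPLease
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			cur = DHCPLease{} // start of a new lease block
		case line == "}":
			leases = append(leases, cur) // block complete
		case strings.HasPrefix(line, "name="):
			cur.Name = strings.TrimPrefix(line, "name=")
		case strings.HasPrefix(line, "ip_address="):
			cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<id>,<mac>"; keep the whole value as ID
			// (matching the log's ID field) and the bare MAC separately.
			cur.ID = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(cur.ID, ","); i >= 0 {
				cur.HWAddress = cur.ID[i+1:]
			}
		case strings.HasPrefix(line, "lease="):
			cur.Lease = strings.TrimPrefix(line, "lease=")
		}
	}
	return leases, sc.Err()
}

func main() {
	leases, err := parseLeases("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, l := range leases {
		if l.HWAddress == "4a:48:f3:ea:b6:e7" { // the MAC the log is searching for
			fmt.Println("found IP:", l.IPAddress)
			return
		}
	}
	fmt.Println("MAC not in lease file yet") // the state every attempt above hits
}
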
	I1003 21:10:05.549098    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 11
	I1003 21:10:05.549126    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:05.549189    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:05.550123    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:05.550173    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:05.550184    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:05.550192    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:05.550198    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:05.550204    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:05.550210    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:05.550230    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:05.550245    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:05.550255    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:05.550262    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:05.550287    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:05.550298    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:05.550306    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:05.550314    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:05.550320    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:05.550329    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:05.550337    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:05.550344    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
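
The timestamps show one attempt every two seconds (21:10:03, 21:10:05, 21:10:07, ...), i.e. a fixed-interval poll of the lease file until the MAC appears. Below is a sketch of such a loop, reusing parseLeases from the note above and assuming fmt and time imports; waitForIP and the maxAttempts cap are illustrative assumptions, since this excerpt does not show the driver's actual retry limit or timeout.

// waitForIP polls the lease file on a fixed interval until the VM's MAC
// shows up, or gives up after maxAttempts tries.
func waitForIP(mac string, maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		leases, err := parseLeases("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.HWAddress == mac {
				return l.IPAddress, nil
			}
		}
		time.Sleep(2 * time.Second) // matches the 2s cadence in the log
	}
	return "", fmt.Errorf("MAC %s never appeared in dhcpd_leases", mac)
}
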
	I1003 21:10:07.551256    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 12
	I1003 21:10:07.551269    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:07.551283    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:07.552165    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:07.552229    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:07.552241    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:07.552248    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:07.552254    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:07.552261    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:07.552277    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:07.552289    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:07.552300    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:07.552308    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:07.552315    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:07.552322    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:07.552328    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:07.552334    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:07.552339    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:07.552345    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:07.552353    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:07.552365    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:07.552373    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:09.553224    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 13
	I1003 21:10:09.553239    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:09.553333    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:09.554272    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:09.554343    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:09.554353    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:09.554362    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:09.554368    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:09.554374    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:09.554381    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:09.554389    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:09.554395    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:09.554401    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:09.554409    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:09.554416    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:09.554423    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:09.554429    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:09.554435    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:09.554442    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:09.554447    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:09.554453    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:09.554461    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:11.554507    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 14
	I1003 21:10:11.554526    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:11.554620    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:11.555510    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:11.555556    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:11.555564    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:11.555572    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:11.555588    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:11.555597    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:11.555620    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:11.555632    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:11.555645    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:11.555661    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:11.555685    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:11.555701    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:11.555716    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:11.555732    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:11.555744    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:11.555752    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:11.555759    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:11.555766    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:11.555773    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:13.557789    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 15
	I1003 21:10:13.557805    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:13.557875    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:13.558941    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:13.559003    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:13.559019    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:13.559029    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:13.559035    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:13.559043    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:13.559049    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:13.559056    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:13.559062    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:13.559067    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:13.559075    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:13.559081    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:13.559087    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:13.559102    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:13.559109    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:13.559115    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:13.559123    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:13.559150    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:13.559163    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:15.560671    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 16
	I1003 21:10:15.560688    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:15.560705    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:15.561600    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:15.561643    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:15.561654    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:15.561666    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:15.561673    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:15.561679    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:15.561685    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:15.561691    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:15.561698    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:15.561704    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:15.561711    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:15.561726    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:15.561740    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:15.561747    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:15.561755    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:15.561771    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:15.561783    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:15.561798    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:15.561806    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:17.563713    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 17
	I1003 21:10:17.563726    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:17.563847    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:17.564784    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:17.564834    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:17.564843    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:17.564851    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:17.564856    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:17.564862    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:17.564871    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:17.564913    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:17.564942    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:17.564948    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:17.564958    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:17.564967    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:17.564973    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:17.564979    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:17.564990    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:17.565005    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:17.565014    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:17.565022    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:17.565033    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:19.566998    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 18
	I1003 21:10:19.567010    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:19.567146    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:19.568020    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:19.568074    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:19.568089    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:19.568102    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:19.568112    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:19.568126    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:19.568139    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:19.568147    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:19.568155    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:19.568166    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:19.568179    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:19.568188    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:19.568196    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:19.568202    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:19.568209    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:19.568227    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:19.568241    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:19.568249    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:19.568257    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:21.570277    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 19
	I1003 21:10:21.570289    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:21.570402    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:21.571391    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:21.571456    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:21.571473    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:21.571486    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:21.571496    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:21.571504    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:21.571515    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:21.571525    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:21.571535    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:21.571554    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:21.571563    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:21.571569    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:21.571574    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:21.571586    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:21.571594    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:21.571601    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:21.571607    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:21.571622    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:21.571633    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:23.572731    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 20
	I1003 21:10:23.572745    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:23.572885    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:23.573781    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:23.573822    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:23.573830    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:23.573840    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:23.573846    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:23.573852    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:23.573858    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:23.573864    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:23.573870    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:23.573893    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:23.573905    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:23.573913    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:23.573922    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:23.573929    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:23.573939    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:23.573966    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:23.573980    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:23.573999    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:23.574012    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:25.574254    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 21
	I1003 21:10:25.574270    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:25.574374    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:25.575306    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:25.575356    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:25.575366    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:25.575376    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:25.575397    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:25.575412    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:25.575422    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:25.575428    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:25.575441    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:25.575450    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:25.575458    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:25.575464    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:25.575471    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:25.575478    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:25.575485    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:25.575491    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:25.575498    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:25.575506    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:25.575511    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:27.576967    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 22
	I1003 21:10:27.576981    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:27.577115    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:27.578010    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:27.578082    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:27.578092    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:27.578103    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:27.578109    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:27.578115    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:27.578128    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:27.578139    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:27.578147    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:27.578160    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:27.578174    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:27.578181    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:27.578189    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:27.578195    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:27.578200    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:27.578215    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:27.578229    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:27.578239    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:27.578246    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:29.578909    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 23
	I1003 21:10:29.578929    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:29.579098    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:29.580114    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:29.580151    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:29.580161    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:29.580171    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:29.580188    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:29.580196    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:29.580202    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:29.580209    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:29.580216    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:29.580224    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:29.580231    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:29.580240    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:29.580246    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:29.580253    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:29.580269    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:29.580283    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:29.580293    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:29.580307    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:29.580319    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:31.581641    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 24
	I1003 21:10:31.581654    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:31.581778    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:31.582660    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:31.582671    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:31.582696    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:31.582712    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:31.582722    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:31.582728    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:31.582735    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:31.582740    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:31.582765    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:31.582777    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:31.582784    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:31.582792    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:31.582798    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:31.582804    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:31.582819    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:31.582830    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:31.582840    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:31.582851    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:31.582861    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:33.583045    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 25
	I1003 21:10:33.583060    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:33.583175    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:33.584034    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:33.584102    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:33.584113    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:33.584119    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:33.584125    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:33.584150    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:33.584164    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:33.584172    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:33.584179    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:33.584192    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:33.584204    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:33.584212    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:33.584217    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:33.584245    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:33.584255    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:33.584265    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:33.584273    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:33.584285    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:33.584293    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:35.586249    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 26
	I1003 21:10:35.586262    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:35.586353    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:35.587500    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:35.587552    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:35.587564    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:35.587572    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:35.587582    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:35.587594    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:35.587613    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:35.587622    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:35.587627    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:35.587634    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:35.587639    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:35.587653    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:35.587665    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:35.587672    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:35.587678    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:35.587684    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:35.587690    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:35.587696    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:35.587703    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:37.589739    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 27
	I1003 21:10:37.589752    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:37.589816    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:37.590693    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:37.590752    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:37.590763    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:37.590810    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:37.590819    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:37.590826    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:37.590833    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:37.590842    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:37.590850    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:37.590861    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:37.590869    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:37.590883    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:37.590897    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:37.590918    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:37.590929    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:37.590938    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:37.590943    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:37.590951    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:37.590959    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:39.590940    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 28
	I1003 21:10:39.590957    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:39.591019    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:39.592002    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:39.592131    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:39.592140    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:39.592160    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:39.592176    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:39.592183    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:39.592189    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:39.592195    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:39.592203    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:39.592208    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:39.592215    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:39.592223    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:39.592249    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:39.592260    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:39.592283    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:39.592317    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:39.592329    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:39.592336    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:39.592344    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:41.594322    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 29
	I1003 21:10:41.594337    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:41.594417    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:41.595312    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 4a:48:f3:ea:b6:e7 in /var/db/dhcpd_leases ...
	I1003 21:10:41.595365    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:41.595385    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:41.595394    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:41.595400    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:41.595409    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:41.595420    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:41.595428    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:41.595433    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:41.595440    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:41.595456    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:41.595468    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:41.595487    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:41.595497    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:41.595512    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:41.595526    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:41.595542    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:41.595554    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:41.595564    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:43.596490    6952 client.go:171] duration metric: took 1m0.782513971s to LocalClient.Create
	I1003 21:10:45.596880    6952 start.go:128] duration metric: took 1m2.816758278s to createHost
	I1003 21:10:45.596894    6952 start.go:83] releasing machines lock for "docker-flags-297000", held for 1m2.816939442s
	W1003 21:10:45.596911    6952 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:48:f3:ea:b6:e7
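
	The numbered attempts above are the hyperkit driver polling /var/db/dhcpd_leases, roughly every two seconds, for a lease whose hardware address matches the VM's generated MAC (4a:48:f3:ea:b6:e7); the same 17 stale "minikube" entries come back every time, so createHost gives up after about a minute with the error shown. A minimal Go sketch of one such scan, assuming the stock macOS lease format whose fields (name, ip_address, hw_address, lease) are echoed in the dhcp entry lines above; findIPByMAC is a hypothetical helper, not the driver's actual code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// findIPByMAC scans the leases file once and returns the IP bound to mac,
	// if any. Hypothetical helper; note the real driver also has to cope with
	// MAC octets that drop leading zeros (e.g. 96:d4:36:7:25:b2 above).
	func findIPByMAC(path, mac string) (string, bool) {
		f, err := os.Open(path)
		if err != nil {
			return "", false
		}
		defer f.Close()
		var ip string
		s := bufio.NewScanner(f)
		for s.Scan() {
			line := strings.TrimSpace(s.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// Format assumption: hw_address=1,<mac>; "1," is the hardware type.
				if strings.HasSuffix(line, ","+mac) {
					return ip, true
				}
			}
		}
		return "", false
	}

	func main() {
		mac := "4a:48:f3:ea:b6:e7" // the MAC the failing run above was waiting for
		for attempt := 0; attempt < 30; attempt++ {
			if ip, ok := findIPByMAC("/var/db/dhcpd_leases", mac); ok {
				fmt.Println("found", ip)
				return
			}
			time.Sleep(2 * time.Second) // the log shows ~2s between attempts
		}
		fmt.Println("could not find an IP address for", mac)
	}
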
	I1003 21:10:45.597284    6952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:10:45.597309    6952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:10:45.608747    6952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53525
	I1003 21:10:45.609171    6952 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:10:45.609659    6952 main.go:141] libmachine: Using API Version  1
	I1003 21:10:45.609673    6952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:10:45.610005    6952 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:10:45.610408    6952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:10:45.610438    6952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:10:45.621923    6952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53527
	I1003 21:10:45.622342    6952 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:10:45.623078    6952 main.go:141] libmachine: Using API Version  1
	I1003 21:10:45.623094    6952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:10:45.623332    6952 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:10:45.623471    6952 main.go:141] libmachine: (docker-flags-297000) Calling .GetState
	I1003 21:10:45.623568    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:45.623650    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:45.624778    6952 main.go:141] libmachine: (docker-flags-297000) Calling .DriverName
	I1003 21:10:45.646399    6952 out.go:177] * Deleting "docker-flags-297000" in hyperkit ...
	I1003 21:10:45.704257    6952 main.go:141] libmachine: (docker-flags-297000) Calling .Remove
	I1003 21:10:45.704402    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:45.704432    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:45.704473    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:45.705540    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:45.705595    6952 main.go:141] libmachine: (docker-flags-297000) DBG | waiting for graceful shutdown
	I1003 21:10:46.706329    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:46.706484    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:46.707596    6952 main.go:141] libmachine: (docker-flags-297000) DBG | waiting for graceful shutdown
	I1003 21:10:47.708538    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:47.708651    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:47.710409    6952 main.go:141] libmachine: (docker-flags-297000) DBG | waiting for graceful shutdown
	I1003 21:10:48.712369    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:48.712441    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:48.713259    6952 main.go:141] libmachine: (docker-flags-297000) DBG | waiting for graceful shutdown
	I1003 21:10:49.713861    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:49.713945    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:49.714565    6952 main.go:141] libmachine: (docker-flags-297000) DBG | waiting for graceful shutdown
	I1003 21:10:50.716333    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:50.716498    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 6976
	I1003 21:10:50.717148    6952 main.go:141] libmachine: (docker-flags-297000) DBG | sending sigkill
	I1003 21:10:50.717158    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:50.728671    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:10:50 WARN : hyperkit: failed to read stdout: EOF
	I1003 21:10:50.728694    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:10:50 WARN : hyperkit: failed to read stderr: EOF
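
	The teardown after the failed create is a stop-then-escalate sequence: the driver re-reads the hyperkit pid, probes for a graceful shutdown once per second (the five "waiting for graceful shutdown" lines above), then sends SIGKILL, at which point the logger loses the child's pipes (the two EOF warnings). A sketch of that pattern, assuming the graceful phase is a plain SIGTERM; stopVM is illustrative, not the driver's code:

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// stopVM asks the process to exit, polls for it to disappear, then
	// escalates to SIGKILL, mirroring the "waiting for graceful shutdown"
	// ... "sending sigkill" sequence above. Assumption: SIGTERM is the
	// graceful request.
	func stopVM(pid int) error {
		p, err := os.FindProcess(pid) // always succeeds on Unix
		if err != nil {
			return err
		}
		_ = p.Signal(syscall.SIGTERM)
		for i := 0; i < 5; i++ { // the log shows five one-second probes
			if p.Signal(syscall.Signal(0)) != nil {
				return nil // signal 0 failing means the process is gone
			}
			time.Sleep(time.Second)
		}
		return p.Signal(syscall.SIGKILL)
	}

	func main() {
		// 6976 is the hyperkit pid from the log; purely illustrative.
		if err := stopVM(6976); err != nil {
			fmt.Println("kill failed:", err)
		}
	}
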
	W1003 21:10:50.748907    6952 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:48:f3:ea:b6:e7
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:48:f3:ea:b6:e7
	I1003 21:10:50.748926    6952 start.go:729] Will try again in 5 seconds ...
	I1003 21:10:55.749186    6952 start.go:360] acquireMachinesLock for docker-flags-297000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:11:48.475269    6952 start.go:364] duration metric: took 52.726158279s to acquireMachinesLock for "docker-flags-297000"
	I1003 21:11:48.475293    6952 start.go:93] Provisioning new machine with config: &{Name:docker-flags-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
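
	This config dump is what makes this the docker-flags test: DockerEnv:[FOO=BAR BAZ=BAT] and DockerOpt:[debug icc=true] would be pushed into the VM's Docker daemon and verified there if the machine ever came up. One plausible way such values get rendered into a systemd unit fragment, as a sketch only; the mechanism and dockerUnitFragment are assumptions, not minikube's actual template:

	package main

	import (
		"fmt"
		"strings"
	)

	// dockerUnitFragment turns DockerEnv entries into Environment= lines and
	// appends DockerOpt entries as daemon flags. Assumption: this mirrors,
	// not reproduces, how minikube configures the in-VM docker.service.
	func dockerUnitFragment(env, opts []string) string {
		var b strings.Builder
		for _, e := range env {
			fmt.Fprintf(&b, "Environment=%q\n", e)
		}
		b.WriteString("ExecStart=/usr/bin/dockerd")
		for _, o := range opts {
			b.WriteString(" --" + o)
		}
		b.WriteString("\n")
		return b.String()
	}

	func main() {
		fmt.Print(dockerUnitFragment(
			[]string{"FOO=BAR", "BAZ=BAT"}, // DockerEnv from the config dump above
			[]string{"debug", "icc=true"},  // DockerOpt from the config dump above
		))
	}
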
	I1003 21:11:48.475366    6952 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:11:48.496560    6952 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:11:48.496661    6952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:11:48.496681    6952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:11:48.508478    6952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53531
	I1003 21:11:48.508844    6952 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:11:48.509236    6952 main.go:141] libmachine: Using API Version  1
	I1003 21:11:48.509264    6952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:11:48.509513    6952 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:11:48.509641    6952 main.go:141] libmachine: (docker-flags-297000) Calling .GetMachineName
	I1003 21:11:48.509733    6952 main.go:141] libmachine: (docker-flags-297000) Calling .DriverName
	I1003 21:11:48.509846    6952 start.go:159] libmachine.API.Create for "docker-flags-297000" (driver="hyperkit")
	I1003 21:11:48.509863    6952 client.go:168] LocalClient.Create starting
	I1003 21:11:48.509915    6952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:11:48.510007    6952 main.go:141] libmachine: Decoding PEM data...
	I1003 21:11:48.510032    6952 main.go:141] libmachine: Parsing certificate...
	I1003 21:11:48.510090    6952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:11:48.510137    6952 main.go:141] libmachine: Decoding PEM data...
	I1003 21:11:48.510149    6952 main.go:141] libmachine: Parsing certificate...
	I1003 21:11:48.510160    6952 main.go:141] libmachine: Running pre-create checks...
	I1003 21:11:48.510166    6952 main.go:141] libmachine: (docker-flags-297000) Calling .PreCreateCheck
	I1003 21:11:48.510245    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:48.510272    6952 main.go:141] libmachine: (docker-flags-297000) Calling .GetConfigRaw
	I1003 21:11:48.517679    6952 main.go:141] libmachine: Creating machine...
	I1003 21:11:48.517688    6952 main.go:141] libmachine: (docker-flags-297000) Calling .Create
	I1003 21:11:48.517788    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:48.517978    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:11:48.517770    7015 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:11:48.517998    6952 main.go:141] libmachine: (docker-flags-297000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:11:48.923719    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:11:48.923619    7015 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/id_rsa...
	I1003 21:11:49.008985    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:11:49.008900    7015 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/docker-flags-297000.rawdisk...
	I1003 21:11:49.008998    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Writing magic tar header
	I1003 21:11:49.009008    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Writing SSH key tar header
	I1003 21:11:49.009398    6952 main.go:141] libmachine: (docker-flags-297000) DBG | I1003 21:11:49.009359    7015 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000 ...
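
	Before first boot the driver lays down the machine's SSH identity and raw disk, as the common.go lines above show. A self-contained sketch of the key step, assuming the conventional 2048-bit RSA key written as PKCS#1 PEM to id_rsa (key size and encoding are assumptions, not taken from the driver):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"
	)

	func main() {
		// Generate a 2048-bit RSA key and write it 0600, roughly the shape of
		// the id_rsa created in the log above. Illustrative only.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		block := &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
		f, err := os.OpenFile("id_rsa", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if err := pem.Encode(f, block); err != nil {
			panic(err)
		}
	}
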
	I1003 21:11:49.379697    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:49.379715    6952 main.go:141] libmachine: (docker-flags-297000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/hyperkit.pid
	I1003 21:11:49.379726    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Using UUID e36c939c-0a98-4390-a6d8-a791198a789e
	I1003 21:11:49.405355    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Generated MAC 8e:58:e2:24:c7:7b
	I1003 21:11:49.405376    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-297000
	I1003 21:11:49.405420    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e36c939c-0a98-4390-a6d8-a791198a789e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:11:49.405446    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e36c939c-0a98-4390-a6d8-a791198a789e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:11:49.405503    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e36c939c-0a98-4390-a6d8-a791198a789e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/docker-flags-297000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-297000"}
	I1003 21:11:49.405542    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e36c939c-0a98-4390-a6d8-a791198a789e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/docker-flags-297000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-297000"
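
	The Arguments and CmdLine lines above are the full hyperkit invocation: PCI slot 0 is the host bridge, slot 31 the LPC bus backing the com1 console, slot 1 the virtio NIC (vmnet derives its MAC from the -U UUID, which is why the driver then greps dhcpd_leases for that MAC), slot 2 the raw disk, slot 3 the boot ISO, and slot 4 a virtio entropy device. A sketch that rebuilds the same argv shape; hyperkitArgs and the stateDir value are illustrative, not the driver's code:

	package main

	import (
		"fmt"
		"strings"
	)

	// hyperkitArgs mirrors the argument vector logged above. Illustrative
	// only; the real vector is built inside docker-machine-driver-hyperkit.
	func hyperkitArgs(stateDir, uuid string, cpus, memMB int) []string {
		return []string{
			"-A", "-u", // ACPI tables; RTC keeps UTC
			"-F", stateDir + "/hyperkit.pid",
			"-c", fmt.Sprintf("%d", cpus),
			"-m", fmt.Sprintf("%dM", memMB),
			"-s", "0:0,hostbridge", // slot 0: PCI host bridge
			"-s", "31,lpc", // slot 31: LPC bus backing com1
			"-s", "1:0,virtio-net", // slot 1: NIC; vmnet derives its MAC from -U
			"-U", uuid,
			"-s", "2:0,virtio-blk," + stateDir + "/docker-flags-297000.rawdisk",
			"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
			"-s", "4,virtio-rnd", // slot 4: entropy device (hw_rng_model=virtio)
			"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
			// Direct kernel boot; the guest cmdline is trimmed here for brevity.
			"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,loglevel=3 console=ttyS0",
		}
	}

	func main() {
		args := hyperkitArgs("/tmp/docker-flags-297000", "e36c939c-0a98-4390-a6d8-a791198a789e", 2, 2048)
		fmt.Println("/usr/local/bin/hyperkit", strings.Join(args, " "))
	}
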
	I1003 21:11:49.405556    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:11:49.408438    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 DEBUG: hyperkit: Pid is 7029
	I1003 21:11:49.408951    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 0
	I1003 21:11:49.408964    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:49.409068    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:11:49.410445    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:11:49.410487    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:49.410505    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:49.410521    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:49.410531    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:49.410542    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:49.410553    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:49.410566    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:49.410576    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:49.410588    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:49.410612    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:49.410623    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:49.410641    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:49.410679    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:49.410699    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:49.410710    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:49.410719    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:49.410727    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:49.410736    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:49.418748    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:11:49.427260    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/docker-flags-297000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:11:49.428332    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:11:49.428350    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:11:49.428372    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:11:49.428386    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:11:49.806547    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:11:49.806562    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:11:49.921289    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:11:49.921306    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:11:49.921317    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:11:49.921325    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:11:49.922191    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:11:49.922206    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:11:51.412035    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 1
	I1003 21:11:51.412049    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:51.412123    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:11:51.413032    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:11:51.413081    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:51.413097    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:51.413108    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:51.413115    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:51.413122    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:51.413131    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:51.413142    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:51.413152    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:51.413158    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:51.413166    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:51.413172    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:51.413178    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:51.413191    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:51.413203    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:51.413210    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:51.413218    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:51.413239    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:51.413251    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:53.414149    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 2
	I1003 21:11:53.414166    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:53.414227    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:11:53.415194    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:11:53.415231    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:53.415248    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:53.415256    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:53.415262    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:53.415267    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:53.415291    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:53.415303    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:53.415311    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:53.415320    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:53.415328    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:53.415337    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:53.415351    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:53.415362    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:53.415370    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:53.415383    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:53.415391    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:53.415398    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:53.415412    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:55.271586    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1003 21:11:55.271739    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1003 21:11:55.271749    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1003 21:11:55.291749    6952 main.go:141] libmachine: (docker-flags-297000) DBG | 2024/10/03 21:11:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1003 21:11:55.417532    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 3
	I1003 21:11:55.417559    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:55.417817    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:11:55.419499    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:11:55.419672    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:55.419689    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:55.419699    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:55.419707    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:55.419726    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:55.419740    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:55.419771    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:55.419789    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:55.419799    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:55.419810    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:55.419827    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:55.419841    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:55.419862    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:55.419872    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:55.419882    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:55.419892    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:55.419905    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:55.419916    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:57.419812    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 4
	I1003 21:11:57.419826    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:57.419904    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:11:57.420827    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:11:57.420897    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:57.420907    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:57.420916    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:57.420935    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:57.420944    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:57.420952    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:57.420958    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:57.420975    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:57.420987    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:57.421000    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:57.421014    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:57.421031    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:57.421042    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:57.421053    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:57.421061    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:57.421068    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:57.421074    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:57.421081    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:59.423047    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 5
	I1003 21:11:59.423059    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:59.423101    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:11:59.423991    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:11:59.424054    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:59.424065    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:59.424081    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:59.424090    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:59.424103    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:59.424111    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:59.424133    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:59.424146    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:59.424152    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:59.424158    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:59.424166    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:59.424176    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:59.424182    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:59.424190    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:59.424198    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:59.424204    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:59.424210    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:59.424217    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:01.426265    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 6
	I1003 21:12:01.426278    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:01.426433    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:01.427541    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:01.427597    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:01.427609    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:01.427615    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:01.427621    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:01.427627    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:01.427646    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:01.427667    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:01.427681    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:01.427693    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:01.427700    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:01.427708    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:01.427721    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:01.427730    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:01.427736    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:01.427743    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:01.427749    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:01.427757    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:01.427766    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:03.429810    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 7
	I1003 21:12:03.429824    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:03.429930    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:03.430903    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:03.430929    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:03.430947    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:03.430954    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:03.430961    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:03.430967    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:03.430973    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:03.430979    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:03.431002    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:03.431016    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:03.431038    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:03.431052    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:03.431063    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:03.431073    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:03.431082    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:03.431091    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:03.431098    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:03.431107    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:03.431119    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:05.432160    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 8
	I1003 21:12:05.432183    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:05.432249    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:05.433151    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:05.433203    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:05.433217    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:05.433245    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:05.433261    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:05.433269    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:05.433277    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:05.433291    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:05.433304    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:05.433311    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:05.433317    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:05.433324    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:05.433329    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:05.433343    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:05.433355    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:05.433364    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:05.433372    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:05.433392    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:05.433399    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:07.433428    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 9
	I1003 21:12:07.433444    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:07.433554    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:07.434395    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:07.434432    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:07.434443    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:07.434458    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:07.434467    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:07.434474    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:07.434480    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:07.434486    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:07.434491    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:07.434513    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:07.434528    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:07.434542    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:07.434555    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:07.434563    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:07.434570    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:07.434575    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:07.434583    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:07.434590    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:07.434596    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:09.436586    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 10
	I1003 21:12:09.436606    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:09.436664    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:09.437637    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:09.437696    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:09.437707    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:09.437719    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:09.437727    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:09.437754    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:09.437765    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:09.437775    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:09.437785    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:09.437797    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:09.437806    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:09.437814    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:09.437820    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:09.437828    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:09.437835    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:09.437841    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:09.437851    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:09.437858    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:09.437864    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:11.438196    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 11
	I1003 21:12:11.438210    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:11.438321    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:11.439377    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:11.439398    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:11.439413    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:11.439422    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:11.439428    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:11.439433    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:11.439440    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:11.439445    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:11.439460    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:11.439472    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:11.439491    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:11.439499    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:11.439511    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:11.439525    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:11.439538    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:11.439543    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:11.439550    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:11.439567    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:11.439578    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:13.441001    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 12
	I1003 21:12:13.441015    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:13.441122    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:13.442013    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:13.442067    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:13.442080    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:13.442104    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:13.442112    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:13.442118    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:13.442126    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:13.442132    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:13.442139    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:13.442166    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:13.442182    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:13.442194    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:13.442212    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:13.442220    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:13.442227    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:13.442232    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:13.442239    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:13.442247    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:13.442254    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:15.444253    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 13
	I1003 21:12:15.444268    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:15.444376    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:15.445283    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:15.445342    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:15.445362    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:15.445370    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:15.445390    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:15.445404    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:15.445412    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:15.445426    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:15.445448    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:15.445463    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:15.445470    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:15.445480    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:15.445486    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:15.445493    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:15.445500    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:15.445507    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:15.445513    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:15.445520    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:15.445528    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:17.446700    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 14
	I1003 21:12:17.446714    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:17.446836    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:17.447735    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:17.447789    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:17.447806    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:17.447823    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:17.447829    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:17.447843    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:17.447853    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:17.447860    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:17.447869    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:17.447884    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:17.447891    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:17.447899    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:17.447914    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:17.447930    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:17.447942    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:17.447960    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:17.447973    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:17.447984    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:17.447991    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:19.447977    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 15
	I1003 21:12:19.447993    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:19.448047    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:19.448941    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:19.449005    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:19.449015    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:19.449024    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:19.449030    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:19.449036    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:19.449043    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:19.449057    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:19.449069    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:19.449077    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:19.449095    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:19.449104    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:19.449109    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:19.449124    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:19.449135    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:19.449145    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:19.449153    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:19.449159    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:19.449166    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:21.451195    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 16
	I1003 21:12:21.451208    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:21.451338    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:21.452551    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:21.452603    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:21.452610    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:21.452618    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:21.452627    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:21.452634    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:21.452639    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:21.452656    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:21.452670    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:21.452679    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:21.452687    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:21.452705    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:21.452716    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:21.452724    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:21.452738    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:21.452746    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:21.452753    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:21.452761    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:21.452777    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:23.454128    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 17
	I1003 21:12:23.454142    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:23.454282    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:23.455178    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:23.455229    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:23.455237    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:23.455247    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:23.455257    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:23.455264    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:23.455269    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:23.455285    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:23.455293    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:23.455300    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:23.455308    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:23.455320    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:23.455327    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:23.455334    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:23.455341    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:23.455347    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:23.455354    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:23.455360    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:23.455368    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:25.455983    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 18
	I1003 21:12:25.455996    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:25.456099    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:25.456934    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:25.456999    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:25.457010    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:25.457017    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:25.457023    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:25.457033    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:25.457038    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:25.457046    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:25.457055    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:25.457066    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:25.457077    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:25.457089    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:25.457097    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:25.457104    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:25.457111    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:25.457117    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:25.457124    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:25.457139    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:25.457161    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:27.458051    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 19
	I1003 21:12:27.458064    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:27.458195    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:27.459146    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:27.459197    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:27.459209    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:27.459224    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:27.459230    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:27.459236    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:27.459249    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:27.459255    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:27.459261    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:27.459268    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:27.459294    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:27.459308    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:27.459321    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:27.459329    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:27.459342    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:27.459353    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:27.459360    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:27.459365    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:27.459381    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:29.459505    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 20
	I1003 21:12:29.459520    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:29.459583    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:29.460439    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:29.460504    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:29.460527    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:29.460537    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:29.460544    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:29.460550    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:29.460566    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:29.460574    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:29.460583    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:29.460598    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:29.460606    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:29.460622    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:29.460629    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:29.460636    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:29.460652    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:29.460663    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:29.460670    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:29.460678    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:29.460687    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:31.460755    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 21
	I1003 21:12:31.460767    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:31.460848    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:31.461851    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:31.461896    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:31.461909    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:31.461918    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:31.461929    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:31.461937    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:31.461944    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:31.461950    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:31.461962    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:31.461969    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:31.461975    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:31.461982    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:31.461990    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:31.461997    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:31.462015    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:31.462026    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:31.462042    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:31.462054    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:31.462063    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:33.462553    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 22
	I1003 21:12:33.462568    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:33.462659    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:33.463578    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:33.463635    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:33.463660    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:33.463673    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:33.463679    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:33.463685    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:33.463693    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:33.463701    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:33.463708    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:33.463714    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:33.463721    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:33.463728    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:33.463750    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:33.463761    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:33.463768    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:33.463781    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:33.463792    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:33.463806    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:33.463818    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:35.465831    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 23
	I1003 21:12:35.465846    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:35.465959    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:35.466910    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:35.466966    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:35.466976    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:35.466985    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:35.466992    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:35.467000    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:35.467006    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:35.467019    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:35.467032    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:35.467039    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:35.467045    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:35.467050    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:35.467064    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:35.467074    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:35.467081    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:35.467086    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:35.467091    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:35.467096    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:35.467116    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:37.469109    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 24
	I1003 21:12:37.469121    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:37.469229    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:37.470102    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:37.470176    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:37.470207    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:37.470214    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:37.470220    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:37.470226    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:37.470231    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:37.470237    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:37.470245    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:37.470251    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:37.470261    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:37.470268    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:37.470274    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:37.470279    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:37.470285    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:37.470299    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:37.470315    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:37.470326    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:37.470336    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:39.471220    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 25
	I1003 21:12:39.471232    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:39.471373    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:39.472308    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:39.472366    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:39.472378    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:39.472390    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:39.472400    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:39.472409    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:39.472418    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:39.472428    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:39.472434    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:39.472441    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:39.472449    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:39.472455    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:39.472462    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:39.472482    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:39.472493    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:39.472513    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:39.472527    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:39.472539    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:39.472546    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:41.472575    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 26
	I1003 21:12:41.472588    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:41.472680    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:41.473547    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:41.473610    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:41.473628    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:41.473637    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:41.473647    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:41.473660    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:41.473667    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:41.473684    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:41.473696    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:41.473706    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:41.473717    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:41.473726    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:41.473733    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:41.473743    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:41.473751    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:41.473761    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:41.473771    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:41.473780    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:41.473787    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:43.475704    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 27
	I1003 21:12:43.475718    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:43.475817    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:43.476722    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:43.476780    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:43.476803    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:43.476813    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:43.476819    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:43.476825    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:43.476831    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:43.476837    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:43.476845    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:43.476851    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:43.476858    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:43.476865    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:43.476870    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:43.476877    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:43.476883    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:43.476891    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:43.476901    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:43.476907    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:43.476914    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:45.477403    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 28
	I1003 21:12:45.477418    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:45.477513    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:45.478392    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:45.478437    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:45.478448    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:45.478490    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:45.478502    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:45.478509    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:45.478515    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:45.478529    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:45.478541    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:45.478549    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:45.478557    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:45.478563    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:45.478570    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:45.478576    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:45.478583    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:45.478598    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:45.478610    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:45.478623    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:45.478629    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:47.480105    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Attempt 29
	I1003 21:12:47.480125    6952 main.go:141] libmachine: (docker-flags-297000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:12:47.480256    6952 main.go:141] libmachine: (docker-flags-297000) DBG | hyperkit pid from json: 7029
	I1003 21:12:47.481409    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Searching for 8e:58:e2:24:c7:7b in /var/db/dhcpd_leases ...
	I1003 21:12:47.481463    6952 main.go:141] libmachine: (docker-flags-297000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:12:47.481476    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:12:47.481484    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:12:47.481492    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:12:47.481499    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:12:47.481506    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:12:47.481512    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:12:47.481518    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:12:47.481566    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:12:47.481576    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:12:47.481583    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:12:47.481588    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:12:47.481599    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:12:47.481606    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:12:47.481611    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:12:47.481619    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:12:47.481626    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:12:47.481645    6952 main.go:141] libmachine: (docker-flags-297000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:12:49.481864    6952 client.go:171] duration metric: took 1m0.972161373s to LocalClient.Create
	I1003 21:12:51.483954    6952 start.go:128] duration metric: took 1m3.008745701s to createHost
	I1003 21:12:51.483972    6952 start.go:83] releasing machines lock for "docker-flags-297000", held for 1m3.008867119s
	W1003 21:12:51.484136    6952 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-297000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:58:e2:24:c7:7b
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-297000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:58:e2:24:c7:7b
	I1003 21:12:51.547179    6952 out.go:201] 
	W1003 21:12:51.568408    6952 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:58:e2:24:c7:7b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:58:e2:24:c7:7b
	W1003 21:12:51.568422    6952 out.go:270] * 
	* 
	W1003 21:12:51.569076    6952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 21:12:51.631407    6952 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-297000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (190.870005ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-297000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-297000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
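The two assertions above amount to running `systemctl show docker --property=Environment` inside the guest and checking that each --docker-env pair appears in the output (which here is empty because the VM never got an IP). A minimal, self-contained sketch of that kind of check follows; dockerEnvContains is an illustrative name invented here, not a minikube test helper, and it assumes a cluster that actually came up.

// envcheck.go: illustrative sketch of the docker_test.go:63 assertion.
// Assumption: the profile exists and `minikube ssh` can reach the guest.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerEnvContains runs `minikube ssh` against the given profile and checks
// whether the docker unit's Environment= property includes the key/value pair.
func dockerEnvContains(profile, pair string) (bool, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		return false, fmt.Errorf("systemctl show failed: %w", err)
	}
	// On success the output looks roughly like: Environment=FOO=BAR BAZ=BAT
	return strings.Contains(string(out), pair), nil
}

func main() {
	for _, pair := range []string{"FOO=BAR", "BAZ=BAT"} {
		ok, err := dockerEnvContains("docker-flags-297000", pair)
		fmt.Printf("%s present=%v err=%v\n", pair, ok, err)
	}
}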
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (180.56917ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-297000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-297000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-297000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-03 21:12:52.113277 -0700 PDT m=+5113.248009919
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-297000 -n docker-flags-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-297000 -n docker-flags-297000: exit status 7 (89.141823ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1003 21:12:52.200485    7060 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:12:52.200508    7060 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-297000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-297000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-297000: (5.250988862s)
--- FAIL: TestDockerFlags (251.88s)
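
For reference, TestDockerFlags asserts that values passed via --docker-env and --docker-opt surface in dockerd's systemd unit. Had the VM come up, the two `systemctl show` probes above would be expected to print the injected settings, along these lines (a sketch of typical systemd output, not captured from this run):

    $ out/minikube-darwin-amd64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT
    $ out/minikube-darwin-amd64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }

In this run both probes returned empty output: the start itself exited with status 80 and the VM never obtained an IP, so there was no control-plane endpoint to SSH into.
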

TestForceSystemdFlag (251.88s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-603000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E1003 21:08:02.029765    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-603000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.241752502s)

-- stdout --
	* [force-systemd-flag-603000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-603000" primary control-plane node in "force-systemd-flag-603000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-603000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1003 21:07:42.442596    6911 out.go:345] Setting OutFile to fd 1 ...
	I1003 21:07:42.442912    6911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:07:42.442917    6911 out.go:358] Setting ErrFile to fd 2...
	I1003 21:07:42.442921    6911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:07:42.443100    6911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 21:07:42.444738    6911 out.go:352] Setting JSON to false
	I1003 21:07:42.473098    6911 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5832,"bootTime":1728009030,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 21:07:42.473281    6911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 21:07:42.495099    6911 out.go:177] * [force-systemd-flag-603000] minikube v1.34.0 on Darwin 15.0.1
	I1003 21:07:42.516936    6911 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 21:07:42.516961    6911 notify.go:220] Checking for updates...
	I1003 21:07:42.560872    6911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 21:07:42.581813    6911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 21:07:42.602937    6911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 21:07:42.623951    6911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:07:42.644923    6911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 21:07:42.672643    6911 config.go:182] Loaded profile config "force-systemd-env-966000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 21:07:42.672735    6911 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 21:07:42.703696    6911 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 21:07:42.724707    6911 start.go:297] selected driver: hyperkit
	I1003 21:07:42.724718    6911 start.go:901] validating driver "hyperkit" against <nil>
	I1003 21:07:42.724728    6911 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 21:07:42.730158    6911 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:07:42.730304    6911 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 21:07:42.741227    6911 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 21:07:42.747581    6911 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:07:42.747631    6911 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 21:07:42.747670    6911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 21:07:42.747908    6911 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 21:07:42.747937    6911 cni.go:84] Creating CNI manager for ""
	I1003 21:07:42.747976    6911 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 21:07:42.747983    6911 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 21:07:42.748049    6911 start.go:340] cluster config:
	{Name:force-systemd-flag-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 21:07:42.748144    6911 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:07:42.790775    6911 out.go:177] * Starting "force-systemd-flag-603000" primary control-plane node in "force-systemd-flag-603000" cluster
	I1003 21:07:42.813543    6911 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 21:07:42.813581    6911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 21:07:42.813595    6911 cache.go:56] Caching tarball of preloaded images
	I1003 21:07:42.813725    6911 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 21:07:42.813734    6911 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 21:07:42.813808    6911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/force-systemd-flag-603000/config.json ...
	I1003 21:07:42.813827    6911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/force-systemd-flag-603000/config.json: {Name:mk280756c14ec284c84e59b09b31f33125fdada4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
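	The cluster config dumped at start.go:340 above is what gets persisted here as JSON; on the build host it could be inspected directly:

	    cat /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/force-systemd-flag-603000/config.json
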
	I1003 21:07:42.814171    6911 start.go:360] acquireMachinesLock for force-systemd-flag-603000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:08:39.755318    6911 start.go:364] duration metric: took 56.941289206s to acquireMachinesLock for "force-systemd-flag-603000"
	I1003 21:08:39.755363    6911 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:08:39.755422    6911 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:08:39.797655    6911 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:08:39.797784    6911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:08:39.797815    6911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:08:39.808939    6911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53503
	I1003 21:08:39.809284    6911 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:08:39.809702    6911 main.go:141] libmachine: Using API Version  1
	I1003 21:08:39.809712    6911 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:08:39.809936    6911 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:08:39.810062    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .GetMachineName
	I1003 21:08:39.810150    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .DriverName
	I1003 21:08:39.810284    6911 start.go:159] libmachine.API.Create for "force-systemd-flag-603000" (driver="hyperkit")
	I1003 21:08:39.810306    6911 client.go:168] LocalClient.Create starting
	I1003 21:08:39.810344    6911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:08:39.810406    6911 main.go:141] libmachine: Decoding PEM data...
	I1003 21:08:39.810419    6911 main.go:141] libmachine: Parsing certificate...
	I1003 21:08:39.810475    6911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:08:39.810523    6911 main.go:141] libmachine: Decoding PEM data...
	I1003 21:08:39.810531    6911 main.go:141] libmachine: Parsing certificate...
	I1003 21:08:39.810541    6911 main.go:141] libmachine: Running pre-create checks...
	I1003 21:08:39.810553    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .PreCreateCheck
	I1003 21:08:39.810638    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:39.810813    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .GetConfigRaw
	I1003 21:08:39.818884    6911 main.go:141] libmachine: Creating machine...
	I1003 21:08:39.818894    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .Create
	I1003 21:08:39.818991    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:39.819204    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:08:39.818993    6936 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:08:39.819226    6911 main.go:141] libmachine: (force-systemd-flag-603000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
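	Note the file:// URL: the boot ISO is copied from the local ISO cache rather than downloaded, so this step does not touch the network. The cached artifact can be verified on the host with:

	    ls -lh /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
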
	I1003 21:08:40.241288    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:08:40.241217    6936 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/id_rsa...
	I1003 21:08:40.301367    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:08:40.301308    6936 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/force-systemd-flag-603000.rawdisk...
	I1003 21:08:40.301382    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Writing magic tar header
	I1003 21:08:40.301394    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Writing SSH key tar header
	I1003 21:08:40.301758    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:08:40.301718    6936 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000 ...
	I1003 21:08:40.670391    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:40.670408    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid
	I1003 21:08:40.670460    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Using UUID 1555fe27-cd71-47da-935c-14a000dd46e5
	I1003 21:08:40.695929    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Generated MAC 42:9f:b:fd:34:ae
	I1003 21:08:40.695952    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-603000
	I1003 21:08:40.695999    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1555fe27-cd71-47da-935c-14a000dd46e5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:08:40.696037    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1555fe27-cd71-47da-935c-14a000dd46e5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:08:40.696086    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1555fe27-cd71-47da-935c-14a000dd46e5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/force-systemd-flag-603000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-603000"}
	I1003 21:08:40.696135    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1555fe27-cd71-47da-935c-14a000dd46e5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/force-systemd-flag-603000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-603000"
	I1003 21:08:40.696151    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:08:40.699179    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 DEBUG: hyperkit: Pid is 6950
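	hyperkit is now running as pid 6950 and the driver begins polling for a DHCP lease. A quick manual liveness check, using the pid file path from the Arguments line above, might look like:

	    cat /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid
	    ps -p 6950 -o pid,command
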
	I1003 21:08:40.699774    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 0
	I1003 21:08:40.699791    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:40.699996    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:40.700997    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:40.701045    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:40.701069    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:40.701081    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:40.701092    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:40.701098    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:40.701109    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:40.701115    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:40.701121    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:40.701128    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:40.701135    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:40.701141    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:40.701147    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:40.701156    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:40.701172    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:40.701185    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:40.701192    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:40.701198    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:40.701219    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
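	Each attempt re-reads /var/db/dhcpd_leases looking for the MAC generated above; none of the 17 existing leases match 42:9f:b:fd:34:ae, so the driver retries roughly every two seconds. The same lookup can be reproduced on the host (note that the lease file, like the log, records MAC octets without zero-padding, e.g. `b` rather than `0b`):

	    grep -i '42:9f:b:fd:34:ae' /var/db/dhcpd_leases
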
	I1003 21:08:40.709490    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:08:40.718011    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:08:40.718975    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:08:40.718993    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:08:40.719006    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:08:40.719016    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:08:41.096603    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:08:41.096626    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:08:41.211273    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:08:41.211308    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:08:41.211361    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:08:41.211384    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:08:41.212077    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:08:41.212087    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:08:42.703225    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 1
	I1003 21:08:42.703241    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:42.703274    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:42.704174    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:42.704229    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:42.704241    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:42.704253    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:42.704259    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:42.704265    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:42.704271    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:42.704277    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:42.704283    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:42.704290    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:42.704301    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:42.704310    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:42.704325    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:42.704333    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:42.704349    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:42.704363    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:42.704376    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:42.704384    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:42.704391    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:44.705186    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 2
	I1003 21:08:44.705212    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:44.705246    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:44.706183    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:44.706196    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:44.706207    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:44.706228    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:44.706234    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:44.706245    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:44.706252    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:44.706271    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:44.706287    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:44.706295    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:44.706301    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:44.706310    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:44.706324    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:44.706334    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:44.706340    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:44.706348    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:44.706354    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:44.706361    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:44.706369    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:46.564870    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 21:08:46.564996    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 21:08:46.565004    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 21:08:46.584790    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:08:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 21:08:46.706819    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 3
	I1003 21:08:46.706844    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:46.707105    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:46.708736    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:46.708898    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:46.708909    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:46.708931    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:46.708949    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:46.708960    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:46.708972    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:46.708996    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:46.709007    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:46.709031    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:46.709047    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:46.709058    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:46.709068    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:46.709078    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:46.709098    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:46.709108    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:46.709118    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:46.709152    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:46.709167    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:48.709833    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 4
	I1003 21:08:48.709849    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:48.709924    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:48.710872    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:48.710935    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:48.710944    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:48.710958    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:48.710963    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:48.710970    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:48.710975    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:48.710981    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:48.710987    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:48.710995    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:48.711152    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:48.711165    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:48.711175    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:48.711188    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:48.711201    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:48.711221    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:48.711232    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:48.711239    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:48.711245    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:50.713258    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 5
	I1003 21:08:50.713271    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:50.713344    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:50.714654    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:50.714678    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:50.714685    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:50.714694    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:50.714700    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:50.714717    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:50.714729    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:50.714753    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:50.714763    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:50.714770    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:50.714778    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:50.714784    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:50.714790    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:50.714796    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:50.714804    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:50.714810    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:50.714817    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:50.714827    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:50.714837    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:52.716408    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 6
	I1003 21:08:52.716423    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:52.716483    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:52.717344    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:52.717385    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:52.717394    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:52.717402    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:52.717408    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:52.717414    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:52.717422    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:52.717428    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:52.717443    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:52.717457    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:52.717474    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:52.717486    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:52.717494    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:52.717513    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:52.717519    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:52.717526    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:52.717532    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:52.717538    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:52.717545    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:54.718736    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 7
	I1003 21:08:54.718752    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:54.718830    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:08:54.719708    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:08:54.719752    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:54.719767    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:54.719781    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:54.719794    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:54.719804    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:54.719811    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:54.719819    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:54.719824    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:54.719831    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:54.719838    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:54.719857    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:54.719869    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:54.719878    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:54.719885    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:54.719911    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:54.719931    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:54.719955    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:54.719965    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
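	
	What these repeated attempts show: after booting the VM, the hyperkit driver polls the macOS DHCP lease database roughly every 2 seconds, looking for a lease whose hardware address matches the VM's MAC (42:9f:b:fd:34:ae). The 17 entries found on every pass all belong to earlier minikube VMs, so the loop keeps retrying. Below is a minimal Go sketch of that polling pattern, not the driver's actual source: the function name ipForMAC, the 30-attempt cap, and the 2 s delay are illustrative assumptions; the /var/db/dhcpd_leases field names (ip_address=, hw_address=1,<mac>) follow the stock macOS lease format, and the sketch assumes ip_address precedes hw_address inside each { } block, as it does in that format.
	
	// Minimal sketch (not minikube's actual code) of the polling loop the
	// log shows: re-read the macOS DHCP lease database every ~2 s and look
	// for the VM's MAC address. Helper names and limits are illustrative.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// ipForMAC scans the lease file for a block whose hw_address field
	// matches mac. macOS writes MAC bytes without leading zeros (e.g.
	// "42:9f:b:fd:34:ae"), so mac must be passed in that stripped form.
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip string // ip_address= line seen most recently in this block
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,<mac>; the "1," prefix is the hardware type,
				// so requiring the comma delimiter matches the whole field.
				if strings.HasSuffix(line, ","+mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}
	
	func main() {
		const mac = "42:9f:b:fd:34:ae" // MAC the log above is searching for
		for attempt := 1; attempt <= 30; attempt++ {
			if ip, err := ipForMAC("/var/db/dhcpd_leases", mac); err == nil {
				fmt.Printf("attempt %d: got IP %s\n", attempt, ip)
				return
			}
			fmt.Printf("attempt %d: lease not found, retrying\n", attempt)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for a DHCP lease")
	}
	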
	[... Attempts 8 through 20 (21:08:56.720 – 21:09:20.756) elided for length: on each pass, ~2 s apart, the driver re-read /var/db/dhcpd_leases, found the same 17 entries listed above, and found no lease for 42:9f:b:fd:34:ae ...]
	I1003 21:09:22.758079    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 21
	I1003 21:09:22.758095    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:22.758142    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:22.759008    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:22.759065    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:22.759075    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:22.759084    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:22.759090    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:22.759103    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:22.759110    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:22.759116    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:22.759122    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:22.759128    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:22.759139    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:22.759146    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:22.759155    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:22.759162    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:22.759169    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:22.759189    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:22.759199    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:22.759215    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:22.759230    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:24.761221    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 22
	I1003 21:09:24.761248    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:24.761313    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:24.762197    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:24.762220    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:24.762227    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:24.762235    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:24.762242    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:24.762259    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:24.762276    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:24.762286    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:24.762294    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:24.762302    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:24.762310    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:24.762317    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:24.762324    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:24.762337    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:24.762349    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:24.762365    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:24.762376    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:24.762384    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:24.762392    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:26.764433    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 23
	I1003 21:09:26.764448    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:26.764504    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:26.765504    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:26.765547    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:26.765558    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:26.765587    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:26.765605    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:26.765614    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:26.765621    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:26.765628    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:26.765634    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:26.765640    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:26.765646    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:26.765661    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:26.765668    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:26.765676    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:26.765684    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:26.765692    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:26.765703    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:26.765709    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:26.765724    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:28.766686    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 24
	I1003 21:09:28.766698    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:28.766803    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:28.767719    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:28.767783    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:28.767790    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:28.767802    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:28.767808    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:28.767816    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:28.767822    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:28.767838    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:28.767850    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:28.767856    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:28.767864    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:28.767871    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:28.767877    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:28.767890    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:28.767903    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:28.767910    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:28.767918    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:28.767925    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:28.767933    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:30.768616    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 25
	I1003 21:09:30.768631    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:30.768704    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:30.769582    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:30.769645    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:30.769659    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:30.769668    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:30.769676    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:30.769685    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:30.769692    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:30.769697    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:30.769704    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:30.769729    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:30.769749    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:30.769758    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:30.769765    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:30.769772    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:30.769793    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:30.769805    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:30.769814    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:30.769821    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:30.769829    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:32.771132    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 26
	I1003 21:09:32.771149    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:32.771226    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:32.772101    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:32.772147    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:32.772158    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:32.772167    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:32.772176    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:32.772185    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:32.772194    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:32.772209    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:32.772224    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:32.772241    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:32.772257    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:32.772267    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:32.772275    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:32.772282    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:32.772287    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:32.772299    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:32.772314    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:32.772327    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:32.772336    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:34.772422    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 27
	I1003 21:09:34.772437    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:34.772487    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:34.773466    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:34.773481    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:34.773490    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:34.773500    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:34.773512    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:34.773530    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:34.773543    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:34.773557    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:34.773569    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:34.773577    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:34.773585    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:34.773593    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:34.773601    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:34.773611    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:34.773620    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:34.773627    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:34.773633    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:34.773639    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:34.773646    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:36.774765    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 28
	I1003 21:09:36.774781    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:36.774917    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:36.775824    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:36.775881    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:36.775889    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:36.775899    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:36.775905    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:36.775912    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:36.775917    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:36.775942    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:36.775949    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:36.775956    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:36.775964    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:36.775979    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:36.775991    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:36.775999    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:36.776007    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:36.776013    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:36.776021    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:36.776028    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:36.776036    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:38.776054    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 29
	I1003 21:09:38.776067    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:38.776179    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:38.777127    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for 42:9f:b:fd:34:ae in /var/db/dhcpd_leases ...
	I1003 21:09:38.777166    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:09:38.777180    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:09:38.777192    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:09:38.777201    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:09:38.777218    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:09:38.777224    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:09:38.777234    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:09:38.777243    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:09:38.777249    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:09:38.777256    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:09:38.777263    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:09:38.777269    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:09:38.777288    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:09:38.777304    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:09:38.777322    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:09:38.777335    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:09:38.777343    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:09:38.777360    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:09:40.777960    6911 client.go:171] duration metric: took 1m0.967811572s to LocalClient.Create
	I1003 21:09:42.780061    6911 start.go:128] duration metric: took 1m3.024783505s to createHost
	I1003 21:09:42.780078    6911 start.go:83] releasing machines lock for "force-systemd-flag-603000", held for 1m3.024921978s
	W1003 21:09:42.780093    6911 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:9f:b:fd:34:ae
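Note: the loop above (Attempt 0 through 29) is the hyperkit driver polling macOS's DHCP lease database for the VM's generated MAC address (42:9f:b:fd:34:ae); none of the 17 existing leases match, so after the final attempt the driver gives up with the "IP address never found in dhcp leases file" error just logged. Below is a minimal sketch of that polling pattern, assuming the usual /var/db/dhcpd_leases key=value block format echoed in the entries above; the helper names are hypothetical and this is not the driver's actual code.

package main

import (
	"errors"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

// Regexes for the two fields we need from each lease block. The lease file
// stores MACs as "hw_address=1,<mac>" (an assumption based on the entries
// echoed in the log above).
var (
	ipRe = regexp.MustCompile(`ip_address=(\S+)`)
	hwRe = regexp.MustCompile(`hw_address=\d+,(\S+)`)
)

// lookupLease scans the lease file once for a block whose hw_address
// matches mac and returns its ip_address.
func lookupLease(path, mac string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	for _, block := range strings.Split(string(data), "}") {
		hw := hwRe.FindStringSubmatch(block)
		ip := ipRe.FindStringSubmatch(block)
		if hw != nil && ip != nil && strings.EqualFold(hw[1], mac) {
			return ip[1], nil
		}
	}
	return "", errors.New("no lease for " + mac)
}

// waitForIP retries the scan, mirroring the roughly 2-second cadence
// between the "Attempt N" lines in the log.
func waitForIP(mac string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookupLease("/var/db/dhcpd_leases", mac); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	fmt.Println(waitForIP("42:9f:b:fd:34:ae", 30, 2*time.Second))
}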
	I1003 21:09:42.780450    6911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:09:42.780486    6911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:09:42.791803    6911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53519
	I1003 21:09:42.792184    6911 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:09:42.792601    6911 main.go:141] libmachine: Using API Version  1
	I1003 21:09:42.792620    6911 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:09:42.792907    6911 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:09:42.793363    6911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:09:42.793400    6911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:09:42.804271    6911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53521
	I1003 21:09:42.804611    6911 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:09:42.804969    6911 main.go:141] libmachine: Using API Version  1
	I1003 21:09:42.804979    6911 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:09:42.805201    6911 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:09:42.805338    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .GetState
	I1003 21:09:42.805467    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:42.805527    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:42.806621    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .DriverName
	I1003 21:09:42.843490    6911 out.go:177] * Deleting "force-systemd-flag-603000" in hyperkit ...
	I1003 21:09:42.885308    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .Remove
	I1003 21:09:42.885447    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:42.885465    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:42.885509    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:42.886587    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:42.886643    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | waiting for graceful shutdown
	I1003 21:09:43.888192    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:43.888324    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:43.889511    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | waiting for graceful shutdown
	I1003 21:09:44.890198    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:44.890334    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:44.891985    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | waiting for graceful shutdown
	I1003 21:09:45.893069    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:45.893113    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:45.893814    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | waiting for graceful shutdown
	I1003 21:09:46.895936    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:46.896017    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:46.896804    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | waiting for graceful shutdown
	I1003 21:09:47.897307    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:09:47.897553    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6950
	I1003 21:09:47.898187    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | sending sigkill
	I1003 21:09:47.898197    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1003 21:09:47.909335    6911 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:9f:b:fd:34:ae
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:9f:b:fd:34:ae
	I1003 21:09:47.909351    6911 start.go:729] Will try again in 5 seconds ...
	I1003 21:09:47.919396    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:09:47 WARN : hyperkit: failed to read stderr: EOF
	I1003 21:09:47.919416    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:09:47 WARN : hyperkit: failed to read stdout: EOF
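Note: the sequence above is the driver's teardown path after the failed create: it polls the hyperkit pid hoping the guest exits on its own ("waiting for graceful shutdown"), then falls back to SIGKILL ("sending sigkill"), after which the stdout/stderr readers report EOF. A minimal sketch of that poll-then-kill pattern follows; the helper names are hypothetical and it assumes a Unix host.

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// alive probes pid with signal 0, which tests existence without killing.
func alive(pid int) bool {
	return syscall.Kill(pid, 0) == nil
}

// stopVM waits up to grace for the hyperkit process to exit on its own,
// then sends SIGKILL, like the log's graceful-shutdown loop.
func stopVM(pid int, grace time.Duration) error {
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		if !alive(pid) {
			return nil // exited gracefully
		}
		time.Sleep(time.Second) // "waiting for graceful shutdown"
	}
	proc, err := os.FindProcess(pid) // never fails on Unix
	if err != nil {
		return err
	}
	return proc.Kill() // "sending sigkill"
}

func main() {
	fmt.Println(stopVM(6950, 5*time.Second))
}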
	I1003 21:09:52.911392    6911 start.go:360] acquireMachinesLock for force-systemd-flag-603000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:10:45.596949    6911 start.go:364] duration metric: took 52.685669571s to acquireMachinesLock for "force-systemd-flag-603000"
	I1003 21:10:45.596972    6911 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:10:45.597037    6911 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:10:45.618513    6911 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:10:45.618603    6911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:10:45.618658    6911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:10:45.630122    6911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53529
	I1003 21:10:45.630560    6911 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:10:45.631049    6911 main.go:141] libmachine: Using API Version  1
	I1003 21:10:45.631061    6911 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:10:45.631301    6911 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:10:45.631421    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .GetMachineName
	I1003 21:10:45.631524    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .DriverName
	I1003 21:10:45.631629    6911 start.go:159] libmachine.API.Create for "force-systemd-flag-603000" (driver="hyperkit")
	I1003 21:10:45.631656    6911 client.go:168] LocalClient.Create starting
	I1003 21:10:45.631682    6911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:10:45.631744    6911 main.go:141] libmachine: Decoding PEM data...
	I1003 21:10:45.631759    6911 main.go:141] libmachine: Parsing certificate...
	I1003 21:10:45.631802    6911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:10:45.631848    6911 main.go:141] libmachine: Decoding PEM data...
	I1003 21:10:45.631856    6911 main.go:141] libmachine: Parsing certificate...
	I1003 21:10:45.631867    6911 main.go:141] libmachine: Running pre-create checks...
	I1003 21:10:45.631872    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .PreCreateCheck
	I1003 21:10:45.631955    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:45.631995    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .GetConfigRaw
	I1003 21:10:45.704270    6911 main.go:141] libmachine: Creating machine...
	I1003 21:10:45.704281    6911 main.go:141] libmachine: (force-systemd-flag-603000) Calling .Create
	I1003 21:10:45.704355    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:45.704529    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:10:45.704355    6997 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:10:45.704592    6911 main.go:141] libmachine: (force-systemd-flag-603000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:10:45.888222    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:10:45.888136    6997 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/id_rsa...
	I1003 21:10:46.009158    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:10:46.009057    6997 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/force-systemd-flag-603000.rawdisk...
	I1003 21:10:46.009176    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Writing magic tar header
	I1003 21:10:46.009194    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Writing SSH key tar header
	I1003 21:10:46.009782    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | I1003 21:10:46.009741    6997 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000 ...
	I1003 21:10:46.375162    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:46.375181    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid
	I1003 21:10:46.375197    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Using UUID 5d0062d7-6971-46f6-a23a-a42764ef452e
	I1003 21:10:46.401183    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Generated MAC ca:aa:be:9:c0:84
	I1003 21:10:46.401199    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-603000
	I1003 21:10:46.401232    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5d0062d7-6971-46f6-a23a-a42764ef452e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:10:46.401259    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5d0062d7-6971-46f6-a23a-a42764ef452e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:10:46.401318    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5d0062d7-6971-46f6-a23a-a42764ef452e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/force-systemd-flag-603000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-603000"}
	I1003 21:10:46.401355    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5d0062d7-6971-46f6-a23a-a42764ef452e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/force-systemd-flag-603000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-603000"
	I1003 21:10:46.401376    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:10:46.404576    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 DEBUG: hyperkit: Pid is 6998
	I1003 21:10:46.404988    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 0
	I1003 21:10:46.405008    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:46.405117    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:46.406347    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:46.406412    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:46.406426    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:46.406438    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:46.406449    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:46.406458    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:46.406478    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:46.406506    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:46.406531    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:46.406548    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:46.406558    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:46.406564    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:46.406573    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:46.406588    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:46.406597    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:46.406609    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:46.406620    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:46.406627    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:46.406632    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:46.415074    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:10:46.423405    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-flag-603000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:10:46.424408    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:10:46.424434    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:10:46.424445    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:10:46.424460    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:10:46.802303    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:10:46.802318    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:10:46.916873    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:10:46.916894    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:10:46.916915    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:10:46.916930    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:10:46.917783    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:10:46.917793    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:10:48.408542    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 1
	I1003 21:10:48.408558    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:48.408614    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:48.409538    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:48.409596    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:48.409608    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:48.409617    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:48.409626    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:48.409636    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:48.409643    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:48.409654    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:48.409662    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:48.409668    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:48.409676    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:48.409688    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:48.409698    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:48.409707    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:48.409713    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:48.409722    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:48.409730    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:48.409737    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:48.409745    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:50.410061    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 2
	I1003 21:10:50.410076    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:50.410189    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:50.411078    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:50.411125    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:50.411135    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:50.411143    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:50.411148    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:50.411164    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:50.411179    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:50.411201    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:50.411231    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:50.411242    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:50.411250    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:50.411256    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:50.411264    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:50.411270    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:50.411278    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:50.411286    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:50.411294    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:50.411300    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:50.411307    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:52.297543    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:52 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1003 21:10:52.297661    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:52 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1003 21:10:52.297671    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:52 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1003 21:10:52.317555    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | 2024/10/03 21:10:52 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1003 21:10:52.411573    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 3
	I1003 21:10:52.411600    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:52.411799    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:52.413415    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:52.413589    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:52.413600    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:52.413611    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:52.413618    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:52.413636    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:52.413644    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:52.413661    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:52.413672    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:52.413693    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:52.413711    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:52.413731    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:52.413742    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:52.413755    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:52.413765    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:52.413775    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:52.413785    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:52.413794    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:52.413803    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:54.414090    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 4
	I1003 21:10:54.414121    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:54.414197    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:54.415093    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:54.415143    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:54.415158    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:54.415189    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:54.415199    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:54.415209    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:54.415217    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:54.415225    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:54.415233    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:54.415240    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:54.415246    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:54.415254    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:54.415260    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:54.415268    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:54.415279    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:54.415294    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:54.415302    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:54.415309    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:54.415315    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:56.416015    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 5
	I1003 21:10:56.416027    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:56.416089    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:56.417040    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:56.417096    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:56.417113    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:56.417135    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:56.417147    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:56.417155    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:56.417163    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:56.417170    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:56.417178    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:56.417185    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:56.417192    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:56.417205    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:56.417214    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:56.417221    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:56.417229    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:56.417237    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:56.417246    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:56.417253    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:56.417263    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:10:58.418118    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 6
	I1003 21:10:58.418132    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:10:58.418262    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:10:58.419148    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:10:58.419194    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:10:58.419218    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:10:58.419243    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:10:58.419260    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:10:58.419270    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:10:58.419289    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:10:58.419301    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:10:58.419317    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:10:58.419328    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:10:58.419336    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:10:58.419345    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:10:58.419357    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:10:58.419367    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:10:58.419374    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:10:58.419381    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:10:58.419388    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:10:58.419395    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:10:58.419407    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:00.421399    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 7
	I1003 21:11:00.421412    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:00.421498    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:00.422435    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:00.422480    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:00.422504    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:00.422516    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:00.422523    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:00.422528    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:00.422538    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:00.422546    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:00.422554    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:00.422560    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:00.422566    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:00.422572    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:00.422589    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:00.422596    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:00.422602    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:00.422610    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:00.422616    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:00.422623    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:00.422632    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:02.423429    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 8
	I1003 21:11:02.423443    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:02.423546    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:02.424444    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:02.424531    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:02.424542    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:02.424552    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:02.424561    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:02.424567    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:02.424575    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:02.424591    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:02.424599    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:02.424606    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:02.424612    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:02.424625    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:02.424633    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:02.424641    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:02.424648    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:02.424654    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:02.424662    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:02.424669    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:02.424674    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:04.424673    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 9
	I1003 21:11:04.424690    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:04.424749    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:04.425672    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:04.425707    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:04.425719    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:04.425739    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:04.425764    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:04.425780    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:04.425793    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:04.425800    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:04.425809    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:04.425815    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:04.425821    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:04.425830    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:04.425845    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:04.425857    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:04.425869    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:04.425880    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:04.425897    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:04.425909    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:04.425919    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:06.425874    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 10
	I1003 21:11:06.425890    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:06.425969    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:06.426820    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:06.426892    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:06.426904    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:06.426913    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:06.426918    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:06.426925    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:06.426932    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:06.426938    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:06.426950    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:06.426956    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:06.426983    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:06.426996    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:06.427009    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:06.427019    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:06.427036    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:06.427044    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:06.427052    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:06.427060    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:06.427072    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:08.428302    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 11
	I1003 21:11:08.428317    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:08.428453    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:08.429372    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:08.429419    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:08.429432    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:08.429441    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:08.429448    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:08.429454    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:08.429461    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:08.429468    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:08.429474    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:08.429480    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:08.429492    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:08.429504    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:08.429512    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:08.429518    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:08.429532    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:08.429545    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:08.429556    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:08.429563    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:08.429571    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:10.431574    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 12
	I1003 21:11:10.431589    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:10.431633    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:10.432668    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:10.432722    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:10.432732    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:10.432747    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:10.432756    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:10.432766    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:10.432773    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:10.432787    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:10.432797    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:10.432805    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:10.432812    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:10.432825    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:10.432834    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:10.432843    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:10.432851    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:10.432865    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:10.432878    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:10.432887    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:10.432898    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
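For orientation: each numbered attempt above is one pass of the driver's IP-discovery loop. It re-reads the hyperkit pid (6998) to confirm the VM process is still running, then scans the macOS bootpd lease file /var/db/dhcpd_leases for an entry whose hardware address matches the MAC generated for this VM (ca:aa:be:9:c0:84); every pass so far finds the same 17 pre-existing leases and no match. Below is a minimal Go sketch of that polling logic, assuming the standard dhcpd_leases block format implied by the entries above — it is an illustration, not the actual docker-machine-driver-hyperkit source.

```go
// Hypothetical sketch of the MAC-to-IP lookup the log shows: poll the macOS
// bootpd lease file until an entry's hw_address matches the VM's MAC.
// Field names and the lease-file path follow the entries printed above;
// this is NOT the driver's real implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the lease file once and returns the IP address bound
// to mac, or "" if no entry matches.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Entries look like "hw_address=1,c6:64:7d:d2:ae:55".
			hw = strings.TrimPrefix(line, "hw_address=1,")
		case line == "}": // end of one lease block
			if hw == mac {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", sc.Err()
}

func main() {
	const mac = "ca:aa:be:9:c0:84" // MAC being searched for in the log above
	for attempt := 1; attempt <= 60; attempt++ {
		ip, err := findIPForMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip != "" {
			fmt.Printf("found %s at %s\n", mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // log shows ~2s between attempts
	}
	fmt.Println("no lease found for", mac)
}
```

The retry cadence matches the log: a fresh scan every ~2 seconds, presumably because bootpd offers no change-notification hook for the lease file, so the driver simply re-reads it until a match appears or its retry budget runs out.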
	I1003 21:11:12 - 21:11:36    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempts 13-25 elided: identical to Attempt 12 (hyperkit pid 6998 alive; same 17 entries in /var/db/dhcpd_leases, 192.169.0.2 through 192.169.0.18; no match for ca:aa:be:9:c0:84), retried every ~2s — the new VM has still not taken a DHCP lease
	I1003 21:11:38.464172    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 26
	I1003 21:11:38.464186    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:38.464273    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:38.465146    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:38.465211    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:38.465224    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:38.465234    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:38.465247    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:38.465257    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:38.465263    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:38.465269    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:38.465277    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:38.465298    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:38.465310    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:38.465318    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:38.465326    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:38.465332    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:38.465340    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:38.465346    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:38.465352    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:38.465358    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:38.465374    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:40.467365    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 27
	I1003 21:11:40.467379    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:40.467415    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:40.468513    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:40.468580    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:40.468592    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:40.468599    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:40.468606    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:40.468613    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:40.468618    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:40.468626    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:40.468634    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:40.468640    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:40.468646    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:40.468658    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:40.468676    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:40.468694    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:40.468708    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:40.468718    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:40.468727    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:40.468741    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:40.468753    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:42.469518    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 28
	I1003 21:11:42.469958    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:42.469974    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:42.470575    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:42.470607    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:42.470644    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:42.470660    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:42.470675    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:42.470689    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:42.470696    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:42.470702    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:42.470759    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:42.470789    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:42.470809    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:42.470857    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:42.470875    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:42.470887    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:42.470915    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:42.470944    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:42.471168    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:42.471179    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:42.471188    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:44.470819    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Attempt 29
	I1003 21:11:44.470841    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:11:44.470871    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | hyperkit pid from json: 6998
	I1003 21:11:44.471742    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Searching for ca:aa:be:9:c0:84 in /var/db/dhcpd_leases ...
	I1003 21:11:44.471793    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:11:44.471807    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:11:44.471821    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:11:44.471827    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:11:44.471833    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:11:44.471839    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:11:44.471846    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:11:44.471851    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:11:44.471858    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:11:44.471864    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:11:44.471871    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:11:44.471882    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:11:44.471895    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:11:44.471904    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:11:44.471914    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:11:44.471921    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:11:44.471929    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:11:44.471938    6911 main.go:141] libmachine: (force-systemd-flag-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:11:46.474292    6911 client.go:171] duration metric: took 1m0.842769346s to LocalClient.Create
	I1003 21:11:48.475124    6911 start.go:128] duration metric: took 1m2.878253101s to createHost
	I1003 21:11:48.475137    6911 start.go:83] releasing machines lock for "force-systemd-flag-603000", held for 1m2.878349756s
	W1003 21:11:48.475235    6911 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-603000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:aa:be:9:c0:84
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-603000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:aa:be:9:c0:84
	I1003 21:11:48.496575    6911 out.go:201] 
	W1003 21:11:48.517518    6911 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:aa:be:9:c0:84
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:aa:be:9:c0:84
	W1003 21:11:48.517529    6911 out.go:270] * 
	* 
	W1003 21:11:48.518159    6911 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 21:11:48.580523    6911 out.go:201] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-603000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
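
The attempts above show the failure mechanism: the hyperkit driver polls /var/db/dhcpd_leases roughly every two seconds (attempts 0 through 29, about a minute in total) for a lease whose hardware address matches the MAC it generated for the VM, here ca:aa:be:9:c0:84 (note the unpadded "9" octet, matching how macOS writes lease entries). Below is a minimal Go sketch of that kind of poll, assuming the stock macOS lease-file layout with name=/ip_address=/hw_address= fields; it is illustrative only, not minikube's actual driver code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans a macOS dhcpd_leases file for an entry whose
// hw_address matches mac and returns that entry's ip_address. This sketch
// assumes ip_address= precedes hw_address= inside each {...} entry; the
// real driver parses the entries structurally.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // new entry: forget the previous entry's address
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			// hw_address carries a "1," type prefix (see the ID:1,... fields
			// in the log), so a substring match is used here.
			return ip, true
		}
	}
	return "", false
}

func main() {
	// MAC the failing run was polling for; macOS writes octets unpadded
	// (9, not 09), and the search string must match that form.
	const mac = "ca:aa:be:9:c0:84"
	for attempt := 0; attempt < 30; attempt++ { // ~1 minute at the log's 2s cadence
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("IP address never found in dhcp leases file")
}

In the failing run above the guest never obtained a lease at all: the same 17 stale entries come back on every attempt, so the poll exhausts its attempts and LocalClient.Create gives up after about a minute (the "took 1m0.842769346s to LocalClient.Create" line).
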
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-603000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-603000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (196.96407ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-603000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-603000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
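
The literal <no value> placeholders in the suggested commands above are what Go's text/template emits when a template references a map key absent from the data it is executed with; the suggestion template was evidently rendered without the profile name. A tiny reproduction follows (the .name key is an illustrative stand-in, not necessarily the field minikube's template uses):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Executing a template against a map that lacks the referenced key
	// makes text/template print the literal placeholder "<no value>".
	t := template.Must(template.New("suggestion").Parse(
		"minikube delete {{.name}}\nminikube start {{.name}}\n"))
	_ = t.Execute(os.Stdout, map[string]string{}) // prints: minikube delete <no value> ...
}
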
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-03 21:11:48.887686 -0700 PDT m=+5050.022243519
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-603000 -n force-systemd-flag-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-603000 -n force-systemd-flag-603000: exit status 7 (89.782229ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1003 21:11:48.975587    7020 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:11:48.975610    7020 status.go:119] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-603000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-603000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-603000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-603000: (5.284751956s)
--- FAIL: TestForceSystemdFlag (251.88s)
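
Both force-systemd tests fail the same way: the hyperkit process stays alive across every probe (the "hyperkit pid from json" lines keep resolving), but the VM never appears in the host's DHCP leases. The "DEBUG: hyperkit: CmdLine" entries in the logs below record exactly how the driver launches the VM. The following sketch spawns hyperkit with the same core flags seen there (-A, -u, -F pid file, -c CPUs, -m memory) and captures the child PID; it is illustrative, not the driver's code, and the device-slot and kexec arguments are omitted for brevity.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Flags mirror the logged hyperkit invocation: ACPI tables (-A), RTC in
	// UTC (-u), a pid file (-F), 2 vCPUs and 2048M of memory.
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", "hyperkit.pid",
		"-c", "2", "-m", "2048M",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to start hyperkit:", err)
		os.Exit(1)
	}
	// The driver persists this PID (the "hyperkit pid from json" lines) and
	// re-checks the process on every DHCP-lease attempt.
	fmt.Println("hyperkit pid:", cmd.Process.Pid)
}
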
TestForceSystemdEnv (233.91s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-966000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E1003 21:04:53.987740    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:05:10.914794    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-966000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.280635619s)
-- stdout --
	* [force-systemd-env-966000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-966000" primary control-plane node in "force-systemd-env-966000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-966000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	I1003 21:04:51.724936    6850 out.go:345] Setting OutFile to fd 1 ...
	I1003 21:04:51.725241    6850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:04:51.725248    6850 out.go:358] Setting ErrFile to fd 2...
	I1003 21:04:51.725252    6850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 21:04:51.725421    6850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 21:04:51.727166    6850 out.go:352] Setting JSON to false
	I1003 21:04:51.754693    6850 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5661,"bootTime":1728009030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 21:04:51.754848    6850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 21:04:51.781862    6850 out.go:177] * [force-systemd-env-966000] minikube v1.34.0 on Darwin 15.0.1
	I1003 21:04:51.830962    6850 notify.go:220] Checking for updates...
	I1003 21:04:51.852833    6850 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 21:04:51.894907    6850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 21:04:51.915895    6850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 21:04:51.936925    6850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 21:04:51.957896    6850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:04:51.978713    6850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1003 21:04:52.000342    6850 config.go:182] Loaded profile config "offline-docker-463000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 21:04:52.000429    6850 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 21:04:52.032002    6850 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 21:04:52.072945    6850 start.go:297] selected driver: hyperkit
	I1003 21:04:52.072956    6850 start.go:901] validating driver "hyperkit" against <nil>
	I1003 21:04:52.072965    6850 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 21:04:52.078045    6850 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:04:52.078180    6850 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 21:04:52.088733    6850 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 21:04:52.095094    6850 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:04:52.095128    6850 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 21:04:52.095161    6850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 21:04:52.095393    6850 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 21:04:52.095423    6850 cni.go:84] Creating CNI manager for ""
	I1003 21:04:52.095463    6850 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 21:04:52.095472    6850 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 21:04:52.095535    6850 start.go:340] cluster config:
	{Name:force-systemd-env-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 21:04:52.095618    6850 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 21:04:52.116845    6850 out.go:177] * Starting "force-systemd-env-966000" primary control-plane node in "force-systemd-env-966000" cluster
	I1003 21:04:52.137995    6850 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 21:04:52.138022    6850 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 21:04:52.138036    6850 cache.go:56] Caching tarball of preloaded images
	I1003 21:04:52.138140    6850 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 21:04:52.138149    6850 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 21:04:52.138217    6850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/force-systemd-env-966000/config.json ...
	I1003 21:04:52.138235    6850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/force-systemd-env-966000/config.json: {Name:mkdab77dc769a5b5419941af40df00c1dbb89c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 21:04:52.138551    6850 start.go:360] acquireMachinesLock for force-systemd-env-966000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:05:30.984572    6850 start.go:364] duration metric: took 38.846101088s to acquireMachinesLock for "force-systemd-env-966000"
	I1003 21:05:30.984625    6850 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:05:30.984688    6850 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:05:31.006226    6850 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:05:31.006379    6850 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:05:31.006412    6850 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:05:31.017352    6850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53483
	I1003 21:05:31.017704    6850 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:05:31.018131    6850 main.go:141] libmachine: Using API Version  1
	I1003 21:05:31.018141    6850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:05:31.018356    6850 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:05:31.018479    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .GetMachineName
	I1003 21:05:31.018570    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .DriverName
	I1003 21:05:31.018681    6850 start.go:159] libmachine.API.Create for "force-systemd-env-966000" (driver="hyperkit")
	I1003 21:05:31.018705    6850 client.go:168] LocalClient.Create starting
	I1003 21:05:31.018736    6850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:05:31.018797    6850 main.go:141] libmachine: Decoding PEM data...
	I1003 21:05:31.018813    6850 main.go:141] libmachine: Parsing certificate...
	I1003 21:05:31.018872    6850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:05:31.018918    6850 main.go:141] libmachine: Decoding PEM data...
	I1003 21:05:31.018926    6850 main.go:141] libmachine: Parsing certificate...
	I1003 21:05:31.018940    6850 main.go:141] libmachine: Running pre-create checks...
	I1003 21:05:31.018950    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .PreCreateCheck
	I1003 21:05:31.019029    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.019234    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .GetConfigRaw
	I1003 21:05:31.052999    6850 main.go:141] libmachine: Creating machine...
	I1003 21:05:31.053023    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .Create
	I1003 21:05:31.053128    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.053283    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:05:31.053119    6866 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:05:31.053348    6850 main.go:141] libmachine: (force-systemd-env-966000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:05:31.261149    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:05:31.261084    6866 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/id_rsa...
	I1003 21:05:31.331542    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:05:31.331466    6866 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/force-systemd-env-966000.rawdisk...
	I1003 21:05:31.331554    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Writing magic tar header
	I1003 21:05:31.331563    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Writing SSH key tar header
	I1003 21:05:31.332175    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:05:31.332137    6866 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000 ...
	I1003 21:05:31.694178    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.694193    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/hyperkit.pid
	I1003 21:05:31.694202    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Using UUID bbd24cd5-de16-43aa-bfb7-61080c8a3a06
	I1003 21:05:31.720400    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Generated MAC ea:a4:20:ad:c0:9e
	I1003 21:05:31.720416    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-966000
	I1003 21:05:31.720451    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bbd24cd5-de16-43aa-bfb7-61080c8a3a06", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:05:31.720477    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bbd24cd5-de16-43aa-bfb7-61080c8a3a06", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:05:31.720559    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bbd24cd5-de16-43aa-bfb7-61080c8a3a06", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/force-systemd-env-966000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-966000"}
	I1003 21:05:31.720592    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bbd24cd5-de16-43aa-bfb7-61080c8a3a06 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/force-systemd-env-966000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-966000"
	I1003 21:05:31.720612    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:05:31.723491    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 DEBUG: hyperkit: Pid is 6867
	I1003 21:05:31.724415    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 0
	I1003 21:05:31.724443    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:31.724563    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:31.725654    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:31.725763    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:31.725792    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:31.725819    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:31.725837    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:31.725877    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:31.725895    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:31.725913    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:31.725927    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:31.725939    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:31.725949    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:31.725972    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:31.726039    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:31.726109    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:31.726157    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:31.726184    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:31.726201    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:31.726225    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:31.726246    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:31.734618    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:05:31.807426    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:05:31.808235    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:05:31.808273    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:05:31.808283    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:05:31.808291    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:05:32.191996    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:05:32.192012    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:05:32.306590    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:05:32.306614    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:05:32.306647    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:05:32.306664    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:05:32.307487    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:05:32.307498    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:05:33.726105    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 1
	I1003 21:05:33.726120    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:33.726206    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:33.727097    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:33.727155    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:33.727165    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:33.727204    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:33.727224    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:33.727237    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:33.727246    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:33.727259    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:33.727272    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:33.727280    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:33.727288    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:33.727304    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:33.727318    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:33.727336    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:33.727348    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:33.727357    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:33.727365    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:33.727372    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:33.727380    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:35.728529    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 2
	I1003 21:05:35.728558    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:35.728663    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:35.729541    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:35.729605    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:35.729618    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:35.729642    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:35.729654    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:35.729669    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:35.729676    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:35.729684    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:35.729698    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:35.729709    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:35.729717    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:35.729747    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:35.729759    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:35.729767    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:35.729782    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:35.729792    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:35.729802    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:35.729810    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:35.729818    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:37.686821    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1003 21:05:37.686932    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1003 21:05:37.686940    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1003 21:05:37.706938    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:05:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1003 21:05:37.730394    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 3
	I1003 21:05:37.730423    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:37.730571    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:37.732214    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:37.732333    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:37.732346    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:37.732356    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:37.732363    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:37.732378    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:37.732395    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:37.732419    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:37.732436    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:37.732454    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:37.732468    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:37.732481    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:37.732492    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:37.732500    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:37.732527    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:37.732536    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:37.732544    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:37.732558    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:37.732582    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:39.732761    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 4
	I1003 21:05:39.732777    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:39.732845    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:39.733782    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:39.733845    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:39.733857    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:39.733877    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:39.733888    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:39.733897    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:39.733907    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:39.733916    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:39.733952    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:39.733966    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:39.733980    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:39.733994    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:39.734003    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:39.734010    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:39.734018    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:39.734025    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:39.734030    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:39.734042    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:39.734053    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:41.734312    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 5
	I1003 21:05:41.734323    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:41.734404    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:41.735351    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:41.735456    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:41.735467    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:41.735474    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:41.735479    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:41.735496    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:41.735519    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:41.735526    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:41.735533    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:41.735541    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:41.735547    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:41.735553    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:41.735559    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:41.735570    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:41.735577    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:41.735591    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:41.735603    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:41.735617    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:41.735624    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:43.736615    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 6
	I1003 21:05:43.736632    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:43.736676    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:43.737557    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:43.737615    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:43.737628    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:43.737638    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:43.737649    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:43.737660    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:43.737669    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:43.737675    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:43.737681    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:43.737687    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:43.737694    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:43.737699    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:43.737712    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:43.737726    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:43.737734    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:43.737740    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:43.737747    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:43.737754    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:43.737772    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:45.738745    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 7
	I1003 21:05:45.738757    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:45.738844    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:45.739783    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:45.739894    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:45.739905    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:45.739915    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:45.739923    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:45.739940    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:45.739949    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:45.739956    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:45.739964    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:45.739981    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:45.739993    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:45.740016    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:45.740028    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:45.740036    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:45.740044    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:45.740051    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:45.740058    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:45.740068    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:45.740079    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:47.742056    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 8
	I1003 21:05:47.742070    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:47.742202    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:47.743205    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:47.743255    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:47.743262    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:47.743272    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:47.743282    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:47.743291    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:47.743298    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:47.743305    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:47.743311    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:47.743323    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:47.743331    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:47.743343    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:47.743350    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:47.743357    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:47.743363    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:47.743383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:47.743398    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:47.743406    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:47.743416    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:49.745461    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 9
	I1003 21:05:49.745476    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:49.745575    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:49.746465    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:49.746516    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:49.746528    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:49.746543    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:49.746549    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:49.746578    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:49.746593    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:49.746602    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:49.746609    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:49.746625    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:49.746634    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:49.746640    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:49.746648    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:49.746656    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:49.746664    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:49.746675    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:49.746685    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:49.746694    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:49.746700    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:51.748055    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 10
	I1003 21:05:51.748067    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:51.748168    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:51.749177    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:51.749233    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:51.749241    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:51.749262    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:51.749273    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:51.749280    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:51.749287    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:51.749294    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:51.749300    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:51.749317    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:51.749323    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:51.749333    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:51.749341    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:51.749358    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:51.749370    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:51.749378    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:51.749383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:51.749390    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:51.749396    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:53.751456    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 11
	I1003 21:05:53.751484    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:53.751637    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:53.752574    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:53.752627    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:53.752636    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:53.752647    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:53.752653    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:53.752665    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:53.752673    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:53.752680    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:53.752697    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:53.752714    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:53.752727    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:53.752735    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:53.752744    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:53.752755    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:53.752763    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:53.752770    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:53.752776    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:53.752782    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:53.752790    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:55.753321    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 12
	I1003 21:05:55.753335    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:55.753439    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:55.754328    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:55.754383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:55.754393    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:55.754402    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:55.754408    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:55.754430    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:55.754442    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:55.754454    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:55.754462    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:55.754482    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:55.754491    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:55.754498    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:55.754503    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:55.754515    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:55.754538    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:55.754546    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:55.754553    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:55.754558    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:55.754569    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:57.755215    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 13
	I1003 21:05:57.755230    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:57.755287    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:57.756263    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:57.756311    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:57.756323    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:57.756346    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:57.756354    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:57.756369    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:57.756383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:57.756394    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:57.756403    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:57.756411    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:57.756419    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:57.756435    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:57.756448    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:57.756457    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:57.756464    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:57.756471    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:57.756477    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:57.756484    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:57.756491    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:05:59.756891    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 14
	I1003 21:05:59.756906    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:05:59.756973    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:05:59.758012    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:05:59.758063    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:05:59.758077    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:05:59.758103    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:05:59.758129    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:05:59.758139    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:05:59.758145    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:05:59.758153    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:05:59.758159    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:05:59.758167    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:05:59.758184    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:05:59.758192    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:05:59.758199    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:05:59.758206    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:05:59.758213    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:05:59.758220    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:05:59.758227    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:05:59.758236    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:05:59.758244    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:01.760247    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 15
	I1003 21:06:01.760262    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:01.760337    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:01.761255    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:01.761375    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:01.761383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:01.761394    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:01.761400    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:01.761412    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:01.761420    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:01.761434    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:01.761440    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:01.761448    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:01.761454    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:01.761460    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:01.761468    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:01.761474    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:01.761486    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:01.761493    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:01.761504    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:01.761511    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:01.761519    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:03.761804    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 16
	I1003 21:06:03.761830    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:03.761964    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:03.762888    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:03.762908    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:03.762925    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:03.762952    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:03.762965    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:03.762979    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:03.762986    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:03.762993    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:03.762998    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:03.763005    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:03.763014    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:03.763021    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:03.763027    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:03.763033    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:03.763041    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:03.763049    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:03.763058    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:03.763070    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:03.763080    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:05.764581    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 17
	I1003 21:06:05.764596    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:05.764655    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:05.765667    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:05.765717    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:05.765737    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:05.765746    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:05.765753    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:05.765761    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:05.765779    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:05.765791    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:05.765799    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:05.765807    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:05.765815    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:05.765823    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:05.765829    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:05.765838    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:05.765847    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:05.765857    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:05.765881    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:05.765888    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:05.765922    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:07.767919    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 18
	I1003 21:06:07.767934    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:07.768016    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:07.768899    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:07.768943    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:07.768955    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:07.768964    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:07.768971    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:07.768977    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:07.768982    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:07.768989    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:07.768995    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:07.769008    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:07.769017    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:07.769026    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:07.769043    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:07.769052    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:07.769059    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:07.769065    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:07.769072    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:07.769079    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:07.769085    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:09.771123    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 19
	I1003 21:06:09.771137    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:09.771213    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:09.772162    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:09.772225    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:09.772235    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:09.772258    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:09.772266    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:09.772273    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:09.772281    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:09.772288    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:09.772294    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:09.772301    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:09.772316    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:09.772327    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:09.772337    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:09.772349    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:09.772358    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:09.772371    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:09.772385    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:09.772400    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:09.772413    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:11.774365    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 20
	I1003 21:06:11.774380    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:11.774433    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:11.775337    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:11.775379    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:11.775389    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:11.775405    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:11.775414    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:11.775420    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:11.775427    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:11.775437    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:11.775447    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:11.775453    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:11.775459    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:11.775471    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:11.775479    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:11.775486    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:11.775492    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:11.775498    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:11.775505    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:11.775512    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:11.775518    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:13.775773    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 21
	I1003 21:06:13.775788    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:13.775921    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:13.776859    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:13.776902    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:13.776926    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:13.776939    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:13.776949    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:13.776957    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:13.776964    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:13.776972    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:13.776978    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:13.776985    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:13.776992    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:13.777000    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:13.777006    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:13.777013    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:13.777020    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:13.777027    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:13.777043    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:13.777055    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:13.777065    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:15.779074    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 22
	I1003 21:06:15.779089    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:15.779202    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:15.780346    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:15.780416    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:15.780427    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:15.780434    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:15.780440    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:15.780446    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:15.780452    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:15.780465    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:15.780473    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:15.780479    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:15.780487    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:15.780493    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:15.780502    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:15.780509    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:15.780522    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:15.780536    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:15.780547    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:15.780553    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:15.780565    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:17.781582    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 23
	I1003 21:06:17.781597    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:17.781706    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:17.782576    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:17.782603    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:17.782610    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:17.782618    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:17.782626    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:17.782633    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:17.782638    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:17.782647    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:17.782663    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:17.782673    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:17.782682    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:17.782688    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:17.782696    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:17.782703    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:17.782710    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:17.782726    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:17.782739    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:17.782747    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:17.782752    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:19.784056    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 24
	I1003 21:06:19.784072    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:19.784195    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:19.785102    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:19.785191    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:19.785202    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:19.785209    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:19.785214    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:19.785231    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:19.785246    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:19.785264    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:19.785275    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:19.785283    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:19.785290    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:19.785306    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:19.785320    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:19.785342    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:19.785357    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:19.785396    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:19.785408    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:19.785444    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:19.785453    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:21.786596    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 25
	I1003 21:06:21.786611    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:21.786679    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:21.787568    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:21.787639    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:21.787663    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:21.787675    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:21.787684    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:21.787693    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:21.787701    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:21.787708    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:21.787715    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:21.787734    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:21.787751    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:21.787759    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:21.787766    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:21.787785    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:21.787795    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:21.787808    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:21.787820    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:21.787840    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:21.787857    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:23.788601    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 26
	I1003 21:06:23.788614    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:23.788731    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:23.789602    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:23.789658    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:23.789682    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:23.789692    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:23.789707    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:23.789713    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:23.789721    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:23.789729    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:23.789736    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:23.789742    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:23.789748    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:23.789755    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:23.789761    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:23.789768    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:23.789774    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:23.789782    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:23.789791    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:23.789797    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:23.789805    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:25.791810    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 27
	I1003 21:06:25.791826    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:25.791913    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:25.792821    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:25.792861    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:25.792868    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:25.792878    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:25.792891    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:25.792899    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:25.792904    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:25.792911    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:25.792919    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:25.792927    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:25.792934    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:25.792941    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:25.792948    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:25.792962    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:25.792980    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:25.792988    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:25.792994    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:25.793010    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:25.793029    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:27.794413    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 28
	I1003 21:06:27.794428    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:27.794534    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:27.795665    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:27.795732    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:27.795742    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:27.795751    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:27.795759    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:27.795766    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:27.795771    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:27.795786    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:27.795798    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:27.795805    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:27.795811    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:27.795817    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:27.795823    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:27.795831    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:27.795838    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:27.795843    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:27.795849    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:27.795866    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:27.795874    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:29.797094    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 29
	I1003 21:06:29.797109    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:29.797186    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:29.798240    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for ea:a4:20:ad:c0:9e in /var/db/dhcpd_leases ...
	I1003 21:06:29.798285    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:06:29.798295    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:06:29.798310    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:06:29.798316    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:06:29.798335    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:06:29.798350    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:06:29.798365    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:06:29.798378    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:06:29.798390    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:06:29.798399    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:06:29.798414    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:06:29.798426    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:06:29.798445    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:06:29.798459    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:06:29.798473    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:06:29.798479    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:06:29.798493    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:06:29.798506    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:06:31.798722    6850 client.go:171] duration metric: took 1m0.780175338s to LocalClient.Create
	I1003 21:06:33.798870    6850 start.go:128] duration metric: took 1m2.814345434s to createHost
	I1003 21:06:33.798885    6850 start.go:83] releasing machines lock for "force-systemd-env-966000", held for 1m2.814462129s
	W1003 21:06:33.798906    6850 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:a4:20:ad:c0:9e
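
The "Attempt N" blocks above are the hyperkit driver polling macOS's /var/db/dhcpd_leases every two seconds for the MAC address it generated for the VM; once the poll budget is exhausted with no matching lease, createHost gives up with the "could not find an IP address" error seen here. Below is a minimal Go sketch of just the matching step. It operates on the parsed Name/IPAddress/HWAddress shape printed in the log (the on-disk lease format differs slightly), and it normalizes octets because the logged HWAddress values are unpadded (e.g. 22:a:4f:5c:f9:cc), so a robust match compares octet values rather than zero-padded strings.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // DHCPEntry mirrors the parsed lease fields printed in the log above.
    type DHCPEntry struct {
        Name      string
        IPAddress string
        HWAddress string // macOS dhcpd logs octets unpadded, e.g. "22:a:4f:5c:f9:cc"
    }

    // normalizeMAC strips leading zeros from each octet so that padded and
    // unpadded spellings of the same address compare equal.
    func normalizeMAC(mac string) string {
        parts := strings.Split(strings.ToLower(mac), ":")
        for i, p := range parts {
            if v, err := strconv.ParseUint(p, 16, 8); err == nil {
                parts[i] = strconv.FormatUint(v, 16)
            }
        }
        return strings.Join(parts, ":")
    }

    // findIPByMAC returns the leased IP for the VM's MAC; ok=false is the
    // "could not find an IP address for ..." case above, meaning the guest
    // never requested a lease.
    func findIPByMAC(entries []DHCPEntry, mac string) (ip string, ok bool) {
        want := normalizeMAC(mac)
        for _, e := range entries {
            if normalizeMAC(e.HWAddress) == want {
                return e.IPAddress, true
            }
        }
        return "", false
    }

    func main() {
        leases := []DHCPEntry{{Name: "minikube", IPAddress: "192.169.0.15", HWAddress: "22:a:4f:5c:f9:cc"}}
        if ip, ok := findIPByMAC(leases, "ea:a4:20:ad:c0:9e"); ok {
            fmt.Println("found", ip)
        } else {
            fmt.Println("could not find an IP address for ea:a4:20:ad:c0:9e")
        }
    }

In this run the guest VM evidently never completed a DHCP exchange, so all 17 leases belong to other minikube VMs and the lookup stays in the ok=false branch for every attempt.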
	I1003 21:06:33.799245    6850 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:06:33.799272    6850 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:06:33.811408    6850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53485
	I1003 21:06:33.811768    6850 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:06:33.812265    6850 main.go:141] libmachine: Using API Version  1
	I1003 21:06:33.812294    6850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:06:33.812612    6850 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:06:33.813011    6850 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:06:33.813034    6850 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:06:33.824288    6850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53487
	I1003 21:06:33.824615    6850 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:06:33.824978    6850 main.go:141] libmachine: Using API Version  1
	I1003 21:06:33.824992    6850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:06:33.825232    6850 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:06:33.825362    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .GetState
	I1003 21:06:33.825457    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:33.825530    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:33.826641    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .DriverName
	I1003 21:06:33.848528    6850 out.go:177] * Deleting "force-systemd-env-966000" in hyperkit ...
	I1003 21:06:33.890204    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .Remove
	I1003 21:06:33.890334    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:33.890361    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:33.890409    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:33.891479    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:33.891541    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | waiting for graceful shutdown
	I1003 21:06:34.892306    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:34.892407    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:34.893485    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | waiting for graceful shutdown
	I1003 21:06:35.894074    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:35.894220    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:35.895908    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | waiting for graceful shutdown
	I1003 21:06:36.897529    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:36.897595    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:36.898228    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | waiting for graceful shutdown
	I1003 21:06:37.899844    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:37.899866    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:37.900702    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | waiting for graceful shutdown
	I1003 21:06:38.901099    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:06:38.901243    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6867
	I1003 21:06:38.901889    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | sending sigkill
	I1003 21:06:38.901899    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
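
The delete sequence above polls the hyperkit pid roughly once per second, logging "waiting for graceful shutdown" on each pass, and escalates to "sending sigkill" when the VM does not exit on its own. A rough sketch of that escalation pattern, assuming a signal-0 liveness probe and SIGTERM as the initial request (the driver's exact shutdown mechanism may differ):

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // alive uses the conventional kill(pid, 0) probe to test whether the
    // process still exists.
    func alive(pid int) bool {
        p, err := os.FindProcess(pid) // never fails on Unix
        if err != nil {
            return false
        }
        return p.Signal(syscall.Signal(0)) == nil
    }

    // stopVM asks for a graceful shutdown, polls once per second, and
    // escalates to SIGKILL when the grace period runs out, matching the
    // sequence in the log above.
    func stopVM(pid int, grace time.Duration) error {
        p, _ := os.FindProcess(pid)
        _ = p.Signal(syscall.SIGTERM) // assumption: the driver may request shutdown differently
        deadline := time.Now().Add(grace)
        for time.Now().Before(deadline) {
            if !alive(pid) {
                return nil // exited gracefully
            }
            fmt.Println("waiting for graceful shutdown")
            time.Sleep(time.Second)
        }
        fmt.Println("sending sigkill")
        return p.Signal(syscall.SIGKILL)
    }

    func main() {
        // Illustrative only: 6867 is the pid from the log; pointing this at
        // a live pid on your own machine will really signal it.
        _ = stopVM(6867, 6*time.Second)
    }

The "failed to read stdout/stderr: EOF" warnings that follow the sigkill are the expected side effect of the log-forwarding goroutines losing the process mid-read.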
	W1003 21:06:38.913264    6850 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:a4:20:ad:c0:9e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:a4:20:ad:c0:9e
	I1003 21:06:38.913287    6850 start.go:729] Will try again in 5 seconds ...
	I1003 21:06:38.924276    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:06:38 WARN : hyperkit: failed to read stdout: EOF
	I1003 21:06:38.924295    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:06:38 WARN : hyperkit: failed to read stderr: EOF
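
start.go treats the missing lease as a temporary error: the half-created VM is deleted, the driver waits five seconds, re-acquires the machines lock (the ~53s wait below suggests another test held it in the meantime), and provisions again from scratch with a fresh UUID and MAC. A minimal sketch of that single-retry policy, with hypothetical createHost/deleteHost stand-ins for the driver calls:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("IP address never found in dhcp leases file")

    // startWithRetry mirrors the behaviour in the log: one failed create is
    // treated as temporary, the broken host is deleted, and a single retry
    // follows after a fixed delay.
    func startWithRetry(createHost func() error, deleteHost func()) error {
        if err := createHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            deleteHost()
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            return createHost()
        }
        return nil
    }

    func main() {
        attempt := 0
        create := func() error {
            attempt++
            if attempt == 1 {
                return errNoLease
            }
            return nil // the retry provisions a fresh VM with a new UUID/MAC
        }
        if err := startWithRetry(create, func() { fmt.Println("* Deleting VM ...") }); err != nil {
            fmt.Println("giving up:", err)
        }
    }

In the failing run recorded here, the second attempt timed out the same way, so the test ultimately exited non-zero.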
	I1003 21:06:43.915342    6850 start.go:360] acquireMachinesLock for force-systemd-env-966000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 21:07:36.786634    6850 start.go:364] duration metric: took 52.871408227s to acquireMachinesLock for "force-systemd-env-966000"
	I1003 21:07:36.786669    6850 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 21:07:36.786723    6850 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 21:07:36.808003    6850 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 21:07:36.808080    6850 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 21:07:36.808104    6850 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 21:07:36.819352    6850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53491
	I1003 21:07:36.819693    6850 main.go:141] libmachine: () Calling .GetVersion
	I1003 21:07:36.820061    6850 main.go:141] libmachine: Using API Version  1
	I1003 21:07:36.820078    6850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 21:07:36.820316    6850 main.go:141] libmachine: () Calling .GetMachineName
	I1003 21:07:36.820446    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .GetMachineName
	I1003 21:07:36.820546    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .DriverName
	I1003 21:07:36.820672    6850 start.go:159] libmachine.API.Create for "force-systemd-env-966000" (driver="hyperkit")
	I1003 21:07:36.820691    6850 client.go:168] LocalClient.Create starting
	I1003 21:07:36.820716    6850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 21:07:36.820774    6850 main.go:141] libmachine: Decoding PEM data...
	I1003 21:07:36.820785    6850 main.go:141] libmachine: Parsing certificate...
	I1003 21:07:36.820826    6850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 21:07:36.820872    6850 main.go:141] libmachine: Decoding PEM data...
	I1003 21:07:36.820885    6850 main.go:141] libmachine: Parsing certificate...
	I1003 21:07:36.820898    6850 main.go:141] libmachine: Running pre-create checks...
	I1003 21:07:36.820906    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .PreCreateCheck
	I1003 21:07:36.820995    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:36.821028    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .GetConfigRaw
	I1003 21:07:36.829059    6850 main.go:141] libmachine: Creating machine...
	I1003 21:07:36.829067    6850 main.go:141] libmachine: (force-systemd-env-966000) Calling .Create
	I1003 21:07:36.829154    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:36.829337    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:07:36.829152    6897 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 21:07:36.829397    6850 main.go:141] libmachine: (force-systemd-env-966000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 21:07:37.157160    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:07:37.157097    6897 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/id_rsa...
	I1003 21:07:37.282999    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:07:37.282952    6897 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/force-systemd-env-966000.rawdisk...
	I1003 21:07:37.283025    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Writing magic tar header
	I1003 21:07:37.283039    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Writing SSH key tar header
	I1003 21:07:37.283453    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | I1003 21:07:37.283415    6897 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000 ...
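
common.go above generates the SSH key, writes the raw disk image, and prepends a "magic tar header" carrying the key; by boot2docker convention the guest detects that tar stream on first boot, formats the disk, and installs the key. A rough sketch under those assumptions (the entry name and layout follow the convention and are not verified against this driver's source):

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    func main() {
        const sizeBytes = 20000 << 20 // Disk=20000MB, per the machine config above
        f, err := os.Create("demo.rawdisk") // hypothetical path; the driver uses <machine dir>/<name>.rawdisk
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Prepend a tar stream carrying the generated public key, cf.
        // "Writing magic tar header" / "Writing SSH key tar header" above.
        tw := tar.NewWriter(f)
        key := []byte("ssh-rsa AAAA... demo") // stand-in for the real id_rsa.pub
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            log.Fatal(err)
        }
        if _, err := tw.Write(key); err != nil {
            log.Fatal(err)
        }
        if err := tw.Flush(); err != nil { // flush, but keep the file usable past the tar stream
            log.Fatal(err)
        }

        // Sparse-extend to the full disk size; no further data is written,
        // so the file occupies little real space until the guest formats it.
        if err := f.Truncate(sizeBytes); err != nil {
            log.Fatal(err)
        }
    }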
	I1003 21:07:37.648530    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:37.648551    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/hyperkit.pid
	I1003 21:07:37.648565    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Using UUID f456beb0-82c4-45d5-a1cc-aada893c9cf8
	I1003 21:07:37.673599    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Generated MAC a2:5f:22:c3:b1:ba
	I1003 21:07:37.673626    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-966000
	I1003 21:07:37.673665    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f456beb0-82c4-45d5-a1cc-aada893c9cf8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000ac690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:07:37.673699    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f456beb0-82c4-45d5-a1cc-aada893c9cf8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000ac690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 21:07:37.673759    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f456beb0-82c4-45d5-a1cc-aada893c9cf8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/force-systemd-env-966000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-966000"}
	I1003 21:07:37.673802    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f456beb0-82c4-45d5-a1cc-aada893c9cf8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/force-systemd-env-966000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-966000"
	I1003 21:07:37.673816    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 21:07:37.676617    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 DEBUG: hyperkit: Pid is 6907
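
The Start/check/Arguments/CmdLine lines above show the fully expanded hyperkit invocation: two vCPUs, 2048M of RAM, a virtio-net NIC keyed to the generated UUID (vmnet derives the guest MAC a2:5f:22:c3:b1:ba from it), the raw disk and boot2docker ISO, and a direct kexec-style kernel boot. A sketch of launching an equivalent command from Go, with flags taken from the log (paths and the kernel cmdline are shortened here; the real driver goes through the hyperkit library rather than exec directly):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Shortened state dir; the log shows the full
        // /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/... paths.
        state := "/tmp/force-systemd-env-966000"
        args := []string{
            "-A", "-u",
            "-F", state + "/hyperkit.pid", // pid file, cf. "Pid is 6907" above
            "-c", "2", "-m", "2048M",
            "-s", "0:0,hostbridge",
            "-s", "31,lpc",
            "-s", "1:0,virtio-net",
            "-U", "f456beb0-82c4-45d5-a1cc-aada893c9cf8", // vmnet derives the guest MAC from this UUID
            "-s", "2:0,virtio-blk," + state + "/force-systemd-env-966000.rawdisk",
            "-s", "3,ahci-cd," + state + "/boot2docker.iso",
            "-s", "4,virtio-rnd",
            "-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
            // Direct kernel boot; kernel cmdline shortened from the log.
            "-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
        }
        cmd := exec.Command("/usr/local/bin/hyperkit", args...)
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        log.Printf("hyperkit pid: %d", cmd.Process.Pid)
    }

Since the VM boots and hyperkit stays alive (the rdmsr chatter below is normal), the failure in this test is purely that the guest never obtains a DHCP lease on the vmnet interface, not that the hypervisor fails to start.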
	I1003 21:07:37.677697    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 0
	I1003 21:07:37.677710    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:37.677745    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:37.678790    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:37.678885    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:37.678915    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:37.678931    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:37.678951    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:37.678965    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:37.678992    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:37.679015    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:37.679031    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:37.679046    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:37.679062    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:37.679076    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:37.679090    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:37.679106    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:37.679131    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:37.679150    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:37.679158    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:37.679170    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:37.679183    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:37.687091    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 21:07:37.695358    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/force-systemd-env-966000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 21:07:37.696353    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:07:37.696377    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:07:37.696392    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:07:37.696409    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:07:38.073663    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 21:07:38.073683    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 21:07:38.188293    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 21:07:38.188314    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 21:07:38.188328    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 21:07:38.188348    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 21:07:38.189188    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 21:07:38.189199    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 21:07:39.681079    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 1
	I1003 21:07:39.681096    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:39.681144    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:39.682045    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:39.682104    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:39.682120    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:39.682136    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:39.682149    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:39.682160    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:39.682168    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:39.682174    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:39.682181    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:39.682190    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:39.682196    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:39.682202    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:39.682220    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:39.682231    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:39.682239    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:39.682248    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:39.682258    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:39.682266    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:39.682274    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:41.682353    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 2
	I1003 21:07:41.682366    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:41.682460    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:41.683414    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:41.683502    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:41.683515    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:41.683526    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:41.683536    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:41.683544    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:41.683553    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:41.683559    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:41.683567    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:41.683608    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:41.683622    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:41.683631    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:41.683637    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:41.683645    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:41.683654    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:41.683665    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:41.683686    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:41.683695    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:41.683702    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:43.543880    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 21:07:43.543999    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 21:07:43.544008    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 21:07:43.563343    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | 2024/10/03 21:07:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 21:07:43.685773    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 3
	I1003 21:07:43.685797    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:43.685970    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:43.687635    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:43.687801    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:43.687812    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:43.687826    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:43.687842    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:43.687854    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:43.687862    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:43.687885    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:43.687902    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:43.687912    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:43.687923    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:43.687950    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:43.687967    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:43.687977    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:43.687985    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:43.688002    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:43.688019    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:43.688046    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:43.688063    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:45.688256    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 4
	I1003 21:07:45.688276    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:45.688383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:45.689279    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:45.689314    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:45.689321    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:45.689331    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:45.689336    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:45.689344    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:45.689351    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:45.689358    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:45.689364    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:45.689372    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:45.689380    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:45.689395    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:45.689409    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:45.689420    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:45.689428    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:45.689434    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:45.689442    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:45.689450    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:45.689458    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:47.691601    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 5
	I1003 21:07:47.691614    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:47.691678    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:47.692602    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:47.692652    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:47.692672    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:47.692683    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:47.692710    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:47.692719    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:47.692727    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:47.692734    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:47.692740    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:47.692749    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:47.692755    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:47.692761    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:47.692767    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:47.692776    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:47.692783    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:47.692790    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:47.692796    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:47.692802    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:47.692808    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:49.694835    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 6
	I1003 21:07:49.694848    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:49.694916    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:49.695951    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:49.696000    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:49.696011    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:49.696019    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:49.696025    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:49.696042    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:49.696060    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:49.696076    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:49.696087    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:49.696095    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:49.696103    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:49.696110    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:49.696116    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:49.696124    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:49.696132    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:49.696139    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:49.696148    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:49.696155    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:49.696164    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:51.697576    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 7
	I1003 21:07:51.697591    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:51.697694    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:51.698630    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:51.698676    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:51.698687    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:51.698698    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:51.698706    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:51.698716    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:51.698735    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:51.698743    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:51.698751    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:51.698758    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:51.698766    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:51.698773    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:51.698781    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:51.698789    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:51.698797    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:51.698803    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:51.698811    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:51.698817    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:51.698825    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
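
Each attempt above is one scan of macOS's /var/db/dhcpd_leases, looking for the VM's MAC address (a2:5f:22:c3:b1:ba) among the recorded leases; the driver prints every entry it rejects, which is why the same 17 lines repeat. Below is a minimal, self-contained Go sketch of how such a scan can work, assuming the standard lease-file layout of `{ ... }` blocks with `name=`, `ip_address=`, `hw_address=`, `identifier=`, and `lease=` fields (the fields the log echoes). It is an illustration, not the hyperkit driver's actual parser; note from entries like `fa:32:e8:cd:88:b` that the file stores octets unpadded, so a MAC with a leading-zero octet would need normalization before comparing, which this sketch skips.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// DHCPEntry mirrors the fields the log prints for each "dhcp entry" line.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
	ID        string
	Lease     string
}

// GetIPAddressByMACAddress scans /var/db/dhcpd_leases for an entry whose
// hw_address matches mac and returns its ip_address.
func GetIPAddressByMACAddress(mac string) (string, error) {
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		return "", err
	}
	defer f.Close()

	var e DHCPEntry
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		switch {
		case line == "{":
			e = DHCPEntry{} // a new lease block begins
		case strings.HasPrefix(line, "name="):
			e.Name = strings.TrimPrefix(line, "name=")
		case strings.HasPrefix(line, "ip_address="):
			e.IPAddress = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Stored as "1,aa:bb:..." with unpadded octets (note "88:b"
			// in the log), so strip the leading type before comparing.
			if _, hw, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok {
				e.HWAddress = hw
			}
		case strings.HasPrefix(line, "identifier="):
			e.ID = strings.TrimPrefix(line, "identifier=")
		case strings.HasPrefix(line, "lease="):
			e.Lease = strings.TrimPrefix(line, "lease=")
		case line == "}":
			if e.HWAddress == mac {
				return e.IPAddress, nil
			}
		}
	}
	if err := s.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for MAC %s", mac)
}

func main() {
	ip, err := GetIPAddressByMACAddress("a2:5f:22:c3:b1:ba")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

In this run the scan keeps coming up empty: the 17 entries only cover 192.169.0.2 through .18, so the new VM's MAC never matches and the loop moves on to the next attempt.
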
	I1003 21:07:53.699607    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 8
	I1003 21:07:53.699622    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:53.699726    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:53.700705    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:53.700754    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:53.700768    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:53.700777    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:53.700783    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:53.700809    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:53.700825    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:53.700836    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:53.700846    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:53.700856    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:53.700863    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:53.700872    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:53.700879    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:53.700887    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:53.700893    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:53.700901    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:53.700917    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:53.700926    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:53.700935    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:55.702987    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 9
	I1003 21:07:55.703006    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:55.703078    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:55.704014    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:55.704044    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:55.704056    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:55.704078    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:55.704089    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:55.704104    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:55.704111    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:55.704118    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:55.704126    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:55.704132    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:55.704139    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:55.704156    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:55.704167    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:55.704174    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:55.704182    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:55.704189    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:55.704195    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:55.704202    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:55.704219    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:57.704858    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 10
	I1003 21:07:57.704871    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:57.704947    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:57.705800    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:57.705852    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:57.705866    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:57.705895    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:57.705914    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:57.705922    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:57.705930    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:57.705937    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:57.705944    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:57.705958    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:57.705969    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:57.705977    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:57.705985    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:57.705998    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:57.706008    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:57.706017    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:57.706030    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:57.706043    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:57.706054    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:07:59.707041    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 11
	I1003 21:07:59.707080    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:07:59.707167    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:07:59.708256    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:07:59.708290    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:07:59.708297    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:07:59.708308    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:07:59.708314    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:07:59.708321    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:07:59.708329    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:07:59.708336    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:07:59.708341    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:07:59.708350    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:07:59.708357    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:07:59.708370    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:07:59.708384    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:07:59.708393    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:07:59.708400    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:07:59.708431    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:07:59.708443    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:07:59.708452    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:07:59.708461    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:01.709532    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 12
	I1003 21:08:01.709547    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:01.709604    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:01.710496    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:01.710555    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:01.710569    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:01.710582    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:01.710601    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:01.710613    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:01.710623    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:01.710631    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:01.710636    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:01.710642    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:01.710666    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:01.710677    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:01.710685    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:01.710691    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:01.710703    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:01.710717    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:01.710726    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:01.710734    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:01.710743    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:03.712693    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 13
	I1003 21:08:03.712708    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:03.712833    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:03.713765    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:03.713829    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:03.713839    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:03.713848    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:03.713867    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:03.713884    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:03.713897    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:03.713915    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:03.713928    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:03.713983    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:03.713991    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:03.713999    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:03.714006    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:03.714016    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:03.714024    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:03.714031    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:03.714039    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:03.714052    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:03.714062    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
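
Before each lease scan, the log also shows a "hyperkit pid from json" line (pid 6907 throughout), i.e. the driver re-reads a persisted pid each attempt to confirm the hyperkit process is still alive before bothering to look for its lease. A rough, hedged sketch of that check follows; the file name "hyperkit.json" and the "Pid" field are assumptions for illustration, not the driver's actual on-disk layout.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"syscall"
)

// state models only the one field this sketch needs; the real driver's
// JSON state almost certainly carries more.
type state struct {
	Pid int `json:"Pid"`
}

// pidFromJSON reads the persisted hyperkit pid back out of a state file.
func pidFromJSON(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	var st state
	if err := json.Unmarshal(data, &st); err != nil {
		return 0, err
	}
	return st.Pid, nil
}

func main() {
	pid, err := pidFromJSON("hyperkit.json") // assumed filename
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Signal 0 delivers nothing; the error tells us whether pid exists.
	if err := syscall.Kill(pid, syscall.Signal(0)); err != nil {
		fmt.Printf("pid %d is gone: %v\n", pid, err)
		return
	}
	fmt.Printf("pid %d is alive\n", pid)
}

Because the probe keeps succeeding here, the driver keeps retrying the lease lookup rather than failing fast, which is consistent with a VM that booted but never completed DHCP.
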
	I1003 21:08:05.715432    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 14
	I1003 21:08:05.715448    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:05.715546    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:05.716425    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:05.716483    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:05.716501    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:05.716513    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:05.716519    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:05.716525    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:05.716547    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:05.716558    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:05.716565    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:05.716579    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:05.716592    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:05.716603    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:05.716611    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:05.716618    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:05.716626    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:05.716643    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:05.716654    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:05.716665    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:05.716674    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:07.718651    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 15
	I1003 21:08:07.718667    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:07.718832    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:07.719731    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:07.719789    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:07.719808    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:07.719821    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:07.719831    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:07.719838    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:07.719850    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:07.719856    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:07.719863    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:07.719869    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:07.719875    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:07.719883    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:07.719890    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:07.719899    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:07.719906    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:07.719911    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:07.719918    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:07.719925    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:07.719933    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:09.721977    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 16
	I1003 21:08:09.721989    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:09.722093    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:09.723004    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:09.723058    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:09.723087    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:09.723098    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:09.723117    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:09.723131    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:09.723139    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:09.723144    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:09.723151    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:09.723157    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:09.723162    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:09.723186    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:09.723198    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:09.723205    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:09.723212    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:09.723218    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:09.723226    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:09.723232    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:09.723240    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:11.725208    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 17
	I1003 21:08:11.725222    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:11.725311    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:11.726448    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:11.726511    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:11.726521    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:11.726537    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:11.726543    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:11.726556    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:11.726565    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:11.726571    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:11.726580    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:11.726591    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:11.726600    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:11.726607    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:11.726616    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:11.726624    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:11.726629    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:11.726639    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:11.726647    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:11.726665    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:11.726677    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:13.728859    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 18
	I1003 21:08:13.728883    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:13.728945    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:13.729834    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:13.729891    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:13.729902    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:13.729910    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:13.729917    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:13.729939    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:13.729951    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:13.729959    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:13.729965    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:13.729972    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:13.729980    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:13.729987    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:13.729993    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:13.730008    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:13.730021    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:13.730029    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:13.730036    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:13.730049    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:13.730059    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:15.730146    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 19
	I1003 21:08:15.730170    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:15.730345    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:15.731288    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:15.731338    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:15.731349    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:15.731375    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:15.731387    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:15.731404    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:15.731416    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:15.731425    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:15.731431    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:15.731438    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:15.731446    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:15.731453    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:15.731460    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:15.731467    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:15.731474    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:15.731481    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:15.731488    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:15.731503    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:15.731516    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:17.731931    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 20
	I1003 21:08:17.731946    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:17.732051    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:17.732994    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:17.733056    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:17.733097    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:17.733111    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:17.733129    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:17.733142    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:17.733151    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:17.733159    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:17.733174    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:17.733182    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:17.733189    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:17.733196    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:17.733205    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:17.733213    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:17.733220    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:17.733229    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:17.733236    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:17.733243    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:17.733251    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
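
The timestamps show the attempt counter advancing once every two seconds (21:08:01, 21:08:03, 21:08:05, ...). A compact sketch of that outer retry loop, with the lookup abstracted behind a function value so the example stays self-contained; the names and the attempt budget are illustrative, not taken from the driver:

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

var errNotFound = errors.New("MAC not in lease file yet")

// waitForIP calls lookup once per attempt, two seconds apart (matching the
// timestamp spacing in the log), until it yields an IP or the budget runs out.
func waitForIP(lookup func() (string, error), maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		log.Printf("Attempt %d", attempt)
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	// Stub lookup that succeeds on the fifth call, standing in for a real
	// /var/db/dhcpd_leases scan like the one sketched earlier.
	calls := 0
	lookup := func() (string, error) {
		calls++
		if calls < 5 {
			return "", errNotFound
		}
		return "192.169.0.19", nil
	}
	ip, err := waitForIP(lookup, 60)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("got IP:", ip)
}

In the failing run recorded here, the lookup never succeeds: the attempts continue past this point with the lease count pinned at 17.
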
	I1003 21:08:19.734111    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 21
	I1003 21:08:19.734126    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:19.734257    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:19.735325    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:19.735369    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:19.735379    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:19.735390    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:19.735397    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:19.735404    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:19.735410    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:19.735424    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:19.735433    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:19.735445    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:19.735462    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:19.735471    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:19.735477    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:19.735484    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:19.735492    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:19.735499    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:19.735512    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:19.735527    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:19.735540    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:21.735914    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 22
	I1003 21:08:21.735925    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:21.736052    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:21.736956    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:21.737003    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:21.737013    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:21.737023    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:21.737030    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:21.737036    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:21.737042    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:21.737058    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:21.737081    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:21.737104    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:21.737138    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:21.737156    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:21.737170    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:21.737187    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:21.737198    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:21.737212    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:21.737220    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:21.737228    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:21.737234    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:23.737472    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 23
	I1003 21:08:23.737487    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:23.737522    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:23.738447    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:23.738503    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:23.738511    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:23.738520    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:23.738525    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:23.738531    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:23.738538    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:23.738544    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:23.738553    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:23.738559    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:23.738565    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:23.738579    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:23.738592    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:23.738608    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:23.738616    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:23.738623    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:23.738631    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:23.738644    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:23.738652    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:25.740714    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 24
	I1003 21:08:25.740729    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:25.740759    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:25.741782    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:25.741837    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:25.741849    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:25.741856    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:25.741863    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:25.741878    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:25.741886    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:25.741902    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:25.741911    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:25.741918    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:25.741924    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:25.741936    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:25.741943    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:25.741955    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:25.741969    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:25.741977    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:25.741985    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:25.742002    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:25.742013    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:27.742026    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 25
	I1003 21:08:27.742042    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:27.742110    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:27.742998    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:27.743040    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:27.743049    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:27.743062    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:27.743071    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:27.743078    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:27.743084    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:27.743091    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:27.743098    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:27.743105    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:27.743112    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:27.743118    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:27.743124    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:27.743133    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:27.743147    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:27.743161    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:27.743178    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:27.743190    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:27.743198    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:29.745190    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 26
	I1003 21:08:29.745205    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:29.745260    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:29.746144    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:29.746207    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:29.746218    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:29.746235    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:29.746242    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:29.746259    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:29.746268    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:29.746276    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:29.746284    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:29.746291    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:29.746308    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:29.746324    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:29.746336    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:29.746345    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:29.746352    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:29.746361    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:29.746367    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:29.746374    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:29.746379    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:31.747213    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 27
	I1003 21:08:31.747225    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:31.747295    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:31.748170    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:31.748243    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:31.748255    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:31.748264    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:31.748277    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:31.748308    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:31.748316    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:31.748327    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:31.748335    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:31.748343    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:31.748350    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:31.748357    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:31.748364    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:31.748371    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:31.748377    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:31.748383    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:31.748398    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:31.748413    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:31.748422    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:33.748749    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 28
	I1003 21:08:33.748763    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:33.748839    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:33.749708    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:33.749751    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:33.749762    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:33.749771    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:33.749777    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:33.749783    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:33.749799    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:33.749811    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:33.749820    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:33.749828    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:33.749836    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:33.749854    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:33.749866    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:33.749875    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:33.749883    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:33.749890    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:33.749897    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:33.749904    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:33.749910    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:35.751114    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Attempt 29
	I1003 21:08:35.751128    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 21:08:35.751276    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | hyperkit pid from json: 6907
	I1003 21:08:35.752157    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Searching for a2:5f:22:c3:b1:ba in /var/db/dhcpd_leases ...
	I1003 21:08:35.752201    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I1003 21:08:35.752212    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:25:8b:a2:94:69 ID:1,c2:25:8b:a2:94:69 Lease:0x66ff76ee}
	I1003 21:08:35.752231    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:66:54:58:be:9c:c9 ID:1,66:54:58:be:9c:c9 Lease:0x66ff762a}
	I1003 21:08:35.752242    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:96:d4:36:7:25:b2 ID:1,96:d4:36:7:25:b2 Lease:0x66ff6775}
	I1003 21:08:35.752252    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:a:4f:5c:f9:cc ID:1,22:a:4f:5c:f9:cc Lease:0x66ff66c0}
	I1003 21:08:35.752259    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:c1:90:be:c8:68 ID:1,9a:c1:90:be:c8:68 Lease:0x66ff7526}
	I1003 21:08:35.752274    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:53:50:32:c9:7b ID:1,32:53:50:32:c9:7b Lease:0x66ff74ec}
	I1003 21:08:35.752285    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:8a:35:ad:4f:8d:d5 ID:1,8a:35:ad:4f:8d:d5 Lease:0x66ff7278}
	I1003 21:08:35.752295    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fa:32:e8:cd:88:b ID:1,fa:32:e8:cd:88:b Lease:0x66ff7251}
	I1003 21:08:35.752301    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:c6:64:7d:d2:ae:55 ID:1,c6:64:7d:d2:ae:55 Lease:0x66ff71f4}
	I1003 21:08:35.752309    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e6:4e:8d:1c:4c:cc ID:1,e6:4e:8d:1c:4c:cc Lease:0x66ff71c3}
	I1003 21:08:35.752317    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:76:8f:6:29:79:b5 ID:1,76:8f:6:29:79:b5 Lease:0x66ff7144}
	I1003 21:08:35.752325    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 21:08:35.752332    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff70d8}
	I1003 21:08:35.752339    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 21:08:35.752345    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 21:08:35.752353    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 21:08:35.752365    6850 main.go:141] libmachine: (force-systemd-env-966000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 21:08:37.753029    6850 client.go:171] duration metric: took 1m0.93249877s to LocalClient.Create
	I1003 21:08:39.755216    6850 start.go:128] duration metric: took 1m2.968656151s to createHost
	I1003 21:08:39.755229    6850 start.go:83] releasing machines lock for "force-systemd-env-966000", held for 1m2.968750995s
	W1003 21:08:39.755347    6850 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-966000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:5f:22:c3:b1:ba
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-966000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:5f:22:c3:b1:ba
	I1003 21:08:39.818560    6850 out.go:201] 
	W1003 21:08:39.839660    6850 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:5f:22:c3:b1:ba
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:5f:22:c3:b1:ba
	W1003 21:08:39.839674    6850 out.go:270] * 
	* 
	W1003 21:08:39.840410    6850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 21:08:39.902652    6850 out.go:201] 

                                                
                                                
** /stderr **
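Note: the thirty-odd "Attempt N" blocks in the stderr above show the hyperkit driver polling /var/db/dhcpd_leases, roughly every two seconds, for a lease whose hardware address matches the VM's generated MAC (a2:5f:22:c3:b1:ba); the GUEST_PROVISION error is that loop exhausting its attempts. Below is a minimal sketch of such a lookup, assuming the simplified macOS lease-file layout the log implies (an ip_address= line preceding the hw_address= line within each entry); it is illustrative only, not the driver's actual code.

    package main

    import (
        "fmt"
        "os"
        "strings"
        "time"
    )

    // findIPForMAC scans the macOS DHCP lease file for an entry whose
    // hw_address field contains mac, returning the ip_address seen just
    // before it. Assumes the simplified layout implied by the log above.
    func findIPForMAC(leaseFile, mac string) (string, bool) {
        data, err := os.ReadFile(leaseFile)
        if err != nil {
            return "", false
        }
        var lastIP string
        for _, line := range strings.Split(string(data), "\n") {
            line = strings.TrimSpace(line)
            if strings.HasPrefix(line, "ip_address=") {
                lastIP = strings.TrimPrefix(line, "ip_address=")
            }
            if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
                return lastIP, true
            }
        }
        return "", false
    }

    func main() {
        const mac = "a2:5f:22:c3:b1:ba" // the MAC the driver is searching for above
        for attempt := 1; attempt <= 30; attempt++ {
            if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
                fmt.Printf("found %s after %d attempt(s)\n", ip, attempt)
                return
            }
            time.Sleep(2 * time.Second) // the log shows ~2s between attempts
        }
        fmt.Println("IP address never found in dhcp leases file")
    }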
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-966000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-966000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-966000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (190.983794ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-966000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-966000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
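Two notes on the block above. `docker info --format {{.CgroupDriver}}` normally prints the daemon's cgroup driver (`cgroupfs` or `systemd`; the test's name suggests it asserts `systemd`), but it cannot run because the VM never got an IP. Separately, the literal `<no value>` in the suggestion text is the telltale output of Go's text/template rendering a key that was never supplied, as this sketch demonstrates (the template text and key name are illustrative):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // With text/template's default missingkey behavior, a key absent
        // from the data renders as the literal "<no value>".
        t := template.Must(template.New("hint").Parse("minikube delete {{.profileArg}}\n"))
        _ = t.Execute(os.Stdout, map[string]string{}) // prints: minikube delete <no value>
    }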
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-03 21:08:40.204588 -0700 PDT m=+4861.338621691
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-966000 -n force-systemd-env-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-966000 -n force-systemd-env-966000: exit status 7 (89.534972ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 21:08:40.292209    6941 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:08:40.292236    6941 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-966000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-966000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-966000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-966000: (5.280873785s)
--- FAIL: TestForceSystemdEnv (233.91s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (118.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-214000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E1003 20:13:01.962602    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-214000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m55.501379333s)

                                                
                                                
-- stdout --
	* [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
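	(The preload lines above are a plain cache check: if the preloaded-images tarball for the requested Kubernetes version and container runtime already exists under the .minikube cache, the download is skipped. Schematically, with an illustrative naming scheme rather than minikube's actual code:)

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath mimics the tarball naming visible in the log above;
    // the exact scheme here is illustrative.
    func preloadPath(cacheDir, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
        return filepath.Join(cacheDir, "preloaded-tarball", name)
    }

    func main() {
        cache := os.ExpandEnv("$HOME/.minikube/cache")
        p := preloadPath(cache, "v1.31.1", "docker")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no cached preload, would download:", p)
        }
    }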
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
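	(The "Launching plugin server" and "Plugin server listening at 127.0.0.1:50857" lines reflect docker-machine's plugin architecture: the driver, here docker-machine-driver-hyperkit, runs as a separate binary exposing a Go net/rpc server on a loopback port, and libmachine issues the `Calling .GetVersion`, `.GetMachineName`, etc. calls over that connection. A hypothetical client sketch follows; the service and method names are illustrative, and libmachine's real wrapper types differ:)

    package main

    import (
        "fmt"
        "net/rpc"
    )

    func main() {
        // Dial the loopback port the plugin advertised at startup
        // (127.0.0.1:50857 in the log above).
        client, err := rpc.Dial("tcp", "127.0.0.1:50857")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Hypothetical service/method name standing in for the driver's
        // versioning call; real libmachine hides this behind its RPC client driver.
        var version int
        if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
            panic(err)
        }
        fmt.Println("driver RPC API version:", version)
    }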
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
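The attempt loop above is the hyperkit driver polling the macOS DHCP lease database until the VM's freshly generated MAC address shows up. A minimal Go sketch of that lookup, assuming the /var/db/dhcpd_leases block format visible in the log (findLeaseIP is a hypothetical helper, not the driver's actual function):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findLeaseIP scans the dhcpd_leases file and returns the ip_address
// from the block whose hw_address contains the given MAC.
func findLeaseIP(leasePath, mac string) (string, bool) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Blocks list ip_address before hw_address, so remember the
		// most recent IP and report it when the MAC matches.
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, true
		}
	}
	return "", false
}

func main() {
	const mac = "a:aa:e8:3c:fe:20" // the MAC generated for ha-214000 in the log
	for attempt := 0; attempt < 60; attempt++ {
		if ip, ok := findLeaseIP("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("Found match: %s -> IP: %s\n", mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
	}
}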
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
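WaitForSSH simply retries a no-op command, `exit 0`, until the guest's sshd answers. A rough, self-contained equivalent that shells out to the system ssh client (the flags and two-second backoff are illustrative assumptions, not the driver's exact settings):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries "ssh ... exit 0" until the guest accepts the
// connection or the deadline passes.
func waitForSSH(user, host, keyPath string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up; provisioning can continue
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, deadline)
}

func main() {
	err := waitForSSH("docker", "192.169.0.5",
		"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa",
		2*time.Minute)
	fmt.Println("wait result:", err)
}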
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
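Picking the provisioner comes down to reading the ID field out of the guest's /etc/os-release, which is exactly what the `cat /etc/os-release` round trip above captures. A small sketch of that parse (standard os-release KEY=VALUE format):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from os-release content,
// stripping optional quotes around the value.
func parseOSRelease(content string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	fields := parseOSRelease("NAME=Buildroot\nID=buildroot\nVERSION_ID=2023.02.9\n")
	fmt.Println(fields["ID"]) // "buildroot" selects the buildroot provisioner
}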
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
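The shell fragment just executed is rendered from the hostname: if no /etc/hosts line ends in the new name, either rewrite an existing 127.0.1.1 entry or append one. A sketch of building that fragment in Go, mirroring the script in the log (hostsFixupCmd is a hypothetical name):

package main

import "fmt"

// hostsFixupCmd returns a shell snippet that maps 127.0.1.1 to the
// given hostname, rewriting the line if present and appending otherwise.
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsFixupCmd("ha-214000")) }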
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
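copyHostCerts deliberately removes any existing copy of each certificate before writing the new one, which is why every cp above is preceded by an rm. A minimal remove-then-copy sketch (replaceFile is a hypothetical helper):

package main

import (
	"fmt"
	"os"
)

// replaceFile removes dst if present, then writes src's bytes to it,
// so a stale certificate never survives a re-provision.
func replaceFile(src, dst string, perm os.FileMode) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return fmt.Errorf("read %s: %w", src, err)
	}
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("rm %s: %w", dst, err)
	}
	return os.WriteFile(dst, data, perm)
}

func main() {
	if err := replaceFile("certs/ca.pem", "ca.pem", 0o644); err != nil {
		fmt.Println(err)
	}
}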
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
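The docker.service apply step above is deliberately idempotent: the rendered unit goes to docker.service.new, and only when `diff` reports a difference is it moved into place and the daemon reloaded, enabled, and restarted, so an unchanged host never bounces Docker. A sketch of building that one-liner (applyUnitCmd is a hypothetical name; the command text matches the log):

package main

import "fmt"

// applyUnitCmd builds the diff-or-swap command: if the new unit matches
// the installed one, nothing happens; otherwise it is moved into place
// and docker is reloaded, enabled, and restarted.
func applyUnitCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unit)
}

func main() {
	fmt.Println(applyUnitCmd("/lib/systemd/system/docker.service"))
}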
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
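The guest clock check parses the output of `date +%s.%N` on the VM and compares it against the host's wall clock; only a delta outside tolerance would trigger a clock reset. A sketch of that comparison using the exact values from the log (the ~1s tolerance is an assumption; the log only shows that -177ms passed):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// and returns guest-minus-host skew.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest and host timestamps taken from the fix.go lines above.
	d, _ := clockDelta("1728011503.815868228", time.Unix(1728011503, 993359000))
	within := d > -time.Second && d < time.Second // assumed tolerance
	fmt.Printf("delta=%v withinTolerance=%v\n", d, within)
}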
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
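When `sysctl net.bridge.bridge-nf-call-iptables` fails because the key is absent (br_netfilter not loaded yet), the fallback is to modprobe the module and then enable IPv4 forwarding, which is the three-command sequence above. A sketch of that chain, with a local stand-in for minikube's ssh_runner so it is self-contained:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a stand-in for the ssh_runner; here it just runs the
// command locally through a shell.
func runCmd(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

// ensureNetfilter mirrors the sequence in the log: probe the sysctl,
// load br_netfilter if the key is missing, then turn on ip_forward.
func ensureNetfilter() error {
	if err := runCmd("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// Key missing: the bridge netfilter module is not loaded yet.
		if err := runCmd("sudo modprobe br_netfilter"); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return runCmd(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}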
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
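The preload path above is: stat /preloaded.tar.lz4 on the guest to see whether it already exists, copy it over if not, extract it into /var with tar -I lz4, then delete the tarball. A condensed sketch of that sequence (again with a local stand-in for the SSH runner, and a shortened tarball path):

package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }

// installPreload copies the preloaded-images tarball to the guest only
// when it is not already there, unpacks it, and removes the tarball.
func installPreload(localTarball string) error {
	const remote = "/preloaded.tar.lz4"
	if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err != nil {
		// Not present yet: in minikube this is an scp over the ssh client.
		if err := run(fmt.Sprintf("cp %s %s", localTarball, remote)); err != nil {
			return fmt.Errorf("copy preload: %w", err)
		}
	}
	if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return run("sudo rm -f " + remote)
}

func main() {
	_ = installPreload("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4")
}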
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
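Note that the generated config above still uses the deprecated kubeadm.k8s.io/v1beta3 API; kubeadm itself flags this during init (see the two warnings logged at 20:12:06.339477 below) and suggests the standard migration:

	# kubeadm's own suggested fix for the deprecated v1beta3 spec
	kubeadm config migrate --old-config old.yaml --new-config new.yaml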
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
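With cp_enable and lb_enable set, kube-vip elects a leader via the plndr-cp-lock lease and binds the HA VIP 192.169.0.254/32 to eth0 on that node. One hedged way to confirm the VIP actually landed, run on the leader node after boot (a check of ours, not part of the log):

	# The VIP from the manifest above should appear as a secondary address on eth0.
	ip addr show eth0 | grep 192.169.0.254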
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
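The apiserver certificate generated at 20:11:54.048838 above is signed for the service IP, localhost, the node IP, and the HA VIP. A standard openssl inspection of its SANs, assuming the profile path from this run:

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'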
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
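Each "openssl x509 -hash -noout" run above computes the subject hash OpenSSL uses to name trust-store links, which is where the otherwise opaque targets 51391683.0, 3ec20f2e.0 and b5213941.0 come from. The pattern, reconstructed from this run:

	# The link name is "<subject hash>.0"; the hash values match those in the log above.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0 here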
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
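The four grep-then-rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm init can regenerate it. Condensed into a loop (a sketch, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done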
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
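The join commands printed above embed token 65w425.f8d5bl0nx70l33q5, which the bootstrapTokens stanza in the kubeadm config caps at a 24h ttl. Once it expires, a fresh worker join line can be minted on the control plane with stock kubeadm (not shown in this log):

	kubeadm token create --print-join-command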
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
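The five "kubectl get sa default" runs between 20:12:06.797815 and 20:12:08.798091 are a poll: kube-controller-manager creates the default ServiceAccount asynchronously, and minikube retries until it exists before declaring elevateKubeSystemPrivileges done (the 2.22s duration metric above). A standalone equivalent of that wait, using the same binary and kubeconfig paths as this run:

	# Retry until the default ServiceAccount appears (mirrors the retries above).
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done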
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
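The sed pipeline at 20:12:08.948587 splices a hosts block mapping host.minikube.internal to 192.169.0.1 into the CoreDNS Corefile, which is what the "host record injected" line above reports. To confirm the edit landed (assuming kubectl access to this cluster):

	kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'host.minikube.internal'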
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
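	The acquireMachinesLock step above retries with a 500ms delay up to a 13m timeout (the Delay/Timeout fields in the lock spec) before giving up. A minimal Go sketch of that poll-until-deadline pattern, assuming a simple exclusive lock file (tryLockFile and acquireWithRetry are illustrative names, not minikube's actual helpers):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLockFile attempts to create the lock file exclusively; creation fails
	// if another process already holds it. (Illustrative only.)
	func tryLockFile(path string) (func(), error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			return nil, err
		}
		f.Close()
		return func() { os.Remove(path) }, nil
	}

	// acquireWithRetry polls the lock every delay until the timeout elapses,
	// mirroring the Delay:500ms Timeout:13m0s settings seen in the log.
	func acquireWithRetry(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			if release, err := tryLockFile(path); err == nil {
				return release, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireWithRetry("/tmp/ha-214000-m02.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning m02")
	}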
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
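	The DEBUG lines above show the exact hyperkit argv the driver assembles: PID file, CPU and memory size, virtio-blk raw disk, the boot2docker ISO on ahci-cd, a com1 autopty for the console, and a kexec boot of the bzimage/initrd pair. A hedged Go sketch of building an equivalent command (paths are shortened placeholders; this is not the driver's real code, and the sketch only prints the argv rather than launching a VM):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		state := "/Users/jenkins/.minikube/machines/ha-214000-m02" // assumed, shortened state dir
		args := []string{
			"-A", "-u",
			"-F", state + "/hyperkit.pid",
			"-c", "2",
			"-m", "2200M",
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net",
			"-U", "03d82732-2b75-4dcf-994a-06b497e93635",
			"-s", "2:0,virtio-blk," + state + "/ha-214000-m02.rawdisk",
			"-s", "3,ahci-cd," + state + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
			"-f", "kexec," + state + "/bzimage," + state + "/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		fmt.Println(strings.Join(cmd.Args, " ")) // print the command line instead of running it
	}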
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
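	The attempt loop above polls /var/db/dhcpd_leases every couple of seconds until the VM's generated MAC appears with a fresh lease, which is how the driver learns the guest IP. A minimal Go sketch of that lookup; the key=value layout parsed below (ip_address=, hw_address=1,<mac>) is an assumption about macOS's bootpd lease format, not something printed verbatim in this log:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPForMAC scans the DHCP lease file for a hardware address and
	// returns the IP of the entry that carries it.
	func findIPForMAC(leaseFile, mac string) (string, bool) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", false
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// value is assumed to look like "1,8e:24:b7:e1:5:14"
				if strings.HasSuffix(line, ","+mac) {
					return ip, true
				}
			case line == "}":
				ip = "" // entry ended without a match; reset for the next one
			}
		}
		return "", false
	}

	func main() {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", "8e:24:b7:e1:5:14"); ok {
			fmt.Println("IP:", ip)
		} else {
			fmt.Println("no lease yet; retry after a delay")
		}
	}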
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
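	"Waiting for SSH" above amounts to repeatedly running a trivial command (exit 0) against the guest until sshd answers. A hedged sketch of that retry loop using the system ssh client rather than libmachine's native Go SSH client (waitForSSH is an illustrative name; the key path mirrors the log but is shortened):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH runs `exit 0` over ssh until it succeeds or attempts run out.
	func waitForSSH(ip, user, keyPath string, attempts int, delay time.Duration) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-i", keyPath,
				"-o", "StrictHostKeyChecking=no",
				"-o", "ConnectTimeout=5",
				user+"@"+ip, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("ssh to %s not ready after %d attempts", ip, attempts)
	}

	func main() {
		err := waitForSSH("192.169.0.6", "docker",
			"/Users/jenkins/.minikube/machines/ha-214000-m02/id_rsa", // shortened path
			30, 2*time.Second)
		fmt.Println("waitForSSH:", err)
	}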
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
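	Provisioner detection above is simply `cat /etc/os-release` followed by matching the ID field ("buildroot" here) against the known provisioners. A small Go sketch of that parse, fed with the exact output captured above (detectProvisioner is an illustrative helper name):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner picks the distro ID out of /etc/os-release content.
	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return ""
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
		fmt.Println(detectProvisioner(sample)) // prints: buildroot
	}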
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
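	The shell snippet above is an idempotent /etc/hosts update: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if one exists, and append one otherwise. The same logic expressed as a Go sketch over the file contents (ensureHostname is an illustrative name, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell above: skip if a line already ends with
	// the hostname, rewrite a 127.0.1.1 entry if present, else append one.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "ha-214000-m02"))
	}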
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
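	The server cert generated above is signed by minikube's CA and carries the SAN list printed in the log (loopback, the guest IP, the hostname, localhost, minikube). A self-signed Go sketch of issuing a cert with those SANs via crypto/x509; self-signing is a simplification here, since minikube actually signs with the ca-key shown above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log:
			DNSNames:    []string{"ha-214000-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		}
		// Self-signed (template is its own parent) to keep the sketch short.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}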
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
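	The guest-clock check above runs `date +%s.%N` in the VM and compares it against host time, accepting the machine when the delta stays within tolerance (about 174ms here). A quick Go sketch of that comparison, using the exact timestamps from the log; the 2s tolerance below is an assumption for illustration, not a documented minikube constant:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns the
	// absolute difference from the host clock.
	func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestStamp, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Duration(math.Abs(float64(host.Sub(guest)))), nil
	}

	func main() {
		host := time.Unix(0, int64(1728011542.68642*float64(time.Second))) // host time from the log
		d, err := clockDelta("1728011542.860223875", host)                 // guest output from the log
		if err != nil {
			panic(err)
		}
		fmt.Println("delta:", d, "within 2s tolerance:", d < 2*time.Second)
	}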
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
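	When `sudo systemctl restart docker` fails (here after a full 61s hang), minikube immediately dumps `journalctl --no-pager -u docker` so the error report carries the daemon's own logs, which is what produces the journal excerpt below. A hedged Go sketch of that run-then-collect-diagnostics pattern (runWithDiagnostics is an illustrative name; it assumes a Linux guest with systemd):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runWithDiagnostics runs a command and, on failure, collects the unit's
	// journal so the returned error includes the daemon's own logs.
	func runWithDiagnostics(unit string, args ...string) error {
		start := time.Now()
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			logs, _ := exec.Command("journalctl", "--no-pager", "-u", unit).CombinedOutput()
			return fmt.Errorf("%v failed after %s: %w\n%s", args, time.Since(start), err, logs)
		}
		return nil
	}

	func main() {
		err := runWithDiagnostics("docker", "sudo", "systemctl", "restart", "docker")
		fmt.Println(err)
	}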
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 start -p ha-214000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
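
The root cause is on the last journal lines above: after systemd stopped and restarted docker.service, the new dockerd (pid 913) spent its whole startup window failing to dial /run/containerd/containerd.sock and exited with "context deadline exceeded". A minimal, hypothetical Go sketch of the obvious guard — waiting for the containerd socket before issuing the restart — follows; this is not minikube source, and the socket path and 60s window are simply taken from the log:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	// The restarted dockerd above timed out dialing this containerd socket;
	// checking it first would surface "containerd is down" directly instead
	// of a generic docker.service start failure.
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println("containerd not ready:", err)
		return
	}
	out, err := exec.Command("systemctl", "restart", "docker").CombinedOutput()
	fmt.Printf("systemctl restart docker: err=%v\n%s", err, out)
}
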
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.309186497s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-042000 ssh findmnt                                                                                       | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-042000                                                                                                | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port867681939/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh findmnt                                                                                       | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh -- ls                                                                                         | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh sudo                                                                                          | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-042000                                                                                                | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount1  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-042000                                                                                                | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount2  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-042000                                                                                                | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount3  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh findmnt                                                                                       | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh findmnt                                                                                       | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh findmnt                                                                                       | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh findmnt                                                                                       | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount          | -p functional-042000                                                                                                | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| update-context | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | update-context                                                                                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                              |                   |         |         |                     |                     |
	| update-context | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | update-context                                                                                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                              |                   |         |         |                     |                     |
	| update-context | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | update-context                                                                                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                              |                   |         |         |                     |                     |
	| image          | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | image ls --format short                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| image          | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | image ls --format yaml                                                                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| ssh            | functional-042000 ssh pgrep                                                                                         | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | buildkitd                                                                                                           |                   |         |         |                     |                     |
	| image          | functional-042000 image build -t                                                                                    | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | localhost/my-image:functional-042000                                                                                |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                    |                   |         |         |                     |                     |
	| image          | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | image ls --format json                                                                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| image          | functional-042000                                                                                                   | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	|                | image ls --format table                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| image          | functional-042000 image ls                                                                                          | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	| delete         | -p functional-042000                                                                                                | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	| start          | -p ha-214000 --wait=true                                                                                            | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|                | --memory=2200 --ha                                                                                                  |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|                | --driver=hyperkit                                                                                                   |                   |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
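
Every entry below follows the header documented on the "Log line format" line above. A minimal parsing sketch, assuming entries match that layout exactly (the regexp and field names here are illustrative, not part of minikube):

package main

import (
	"fmt"
	"regexp"
)

// glogLine captures the documented header fields: severity letter, mmdd date,
// timestamp with microseconds, thread id, file:line, then the message.
var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ..."
	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("level=%s date=%s time=%s thread=%s loc=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
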
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
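
The "&{...}" cluster-config dumps in these logs are Go's fmt "%+v" rendering of a struct pointer. A toy reproduction with a hypothetical, much smaller struct (not minikube's real config type, which has many more fields):

package main

import "fmt"

// Node and ClusterConfig are illustrative stand-ins only.
type Node struct {
	Name         string
	Port         int
	ControlPlane bool
}

type ClusterConfig struct {
	Name   string
	Memory int
	Nodes  []Node
}

func main() {
	cfg := &ClusterConfig{
		Name:   "ha-214000",
		Memory: 2200,
		Nodes:  []Node{{Port: 8443, ControlPlane: true}},
	}
	// Prints: &{Name:ha-214000 Memory:2200 Nodes:[{Name: Port:8443 ControlPlane:true}]}
	fmt.Printf("%+v\n", cfg)
}
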
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
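
The CmdLine above lays out the whole VM in one invocation: -F names the pid file, -c 2 and -m 2200M size the guest, and each -s option fills a PCI slot (0:0 hostbridge, 31 lpc, 1:0 virtio-net, 2:0 virtio-blk backed by ha-214000.rawdisk, 3 ahci-cd for boot2docker.iso, 4 virtio-rnd). -l com1,autopty wires the serial console to a pty plus the console-ring log, while -f kexec boots bzimage/initrd directly, appending the kernel command line that follows.
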
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
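
Attempts 0 through 5 above are the driver polling /var/db/dhcpd_leases every two seconds until macOS's DHCP server records a lease for the MAC it generated (a:aa:e8:3c:fe:20). A minimal Go sketch of that lookup, assuming the usual key=value block layout bootpd writes (the log prints the parsed form of the same entries); ipForMAC is an illustrative name, not minikube's actual helper:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC returns the ip_address recorded in the same lease block as
	// the given hardware address, or an error if no block matches.
	func ipForMAC(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,a:aa:e8:3c:fe:20 -- drop the "1," type prefix.
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
				if strings.EqualFold(hw, mac) && ip != "" {
					return ip, nil
				}
			case line == "}":
				ip = "" // block closed without a match; reset
			}
		}
		return "", fmt.Errorf("no lease for %s in %s", mac, leaseFile)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // 192.169.0.5 once the lease from Attempt 5 exists
	}

Each retry re-reads the whole file, which is cheap given the handful of entries a local DHCP server accumulates.
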
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
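
"Waiting for SSH" boils down to running exit 0 over SSH until it succeeds, which proves both that sshd is up and that the injected key works. A sketch of such a probe using golang.org/x/crypto/ssh, assuming key auth with the generated id_rsa and no host-key pinning (reasonable for a throwaway local VM); waitForSSH is an illustrative name:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, err := client.NewSession()
				if err == nil {
					err = sess.Run("exit 0") // same no-op probe as in the log
					sess.Close()
				}
				client.Close()
				if err == nil {
					return nil // sshd is up and accepting commands
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh not reachable at %s within %s", addr, timeout)
	}

	func main() {
		keyPath := "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa"
		if err := waitForSSH("192.169.0.5:22", "docker", keyPath, 2*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("ssh is up")
	}
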
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
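
Provisioner selection keys off the ID field of the /etc/os-release fetched just above: ID=buildroot marks the guest as a compatible host, so the buildroot path (hostname, certificates, docker unit) runs next.
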
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
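
The server certificate minted here must be valid for every name or address a client might dial, hence the SAN list [127.0.0.1 192.169.0.5 ha-214000 localhost minikube]. A sketch of issuing such a certificate with Go's crypto/x509, assuming the profile CA certificate and key are already parsed (PEM output of the private key and callers are omitted; issueServerCert is an illustrative name):

	// Package sketch: illustrative only, not minikube's implementation.
	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert generates a fresh RSA key and a server certificate
	// carrying the SANs from the log, signed by the given CA.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
			DNSNames:     []string{"ha-214000", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}
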
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
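The diff -u old new || { mv ...; systemctl ... } one-liner makes the unit update idempotent: docker is only reinstalled and restarted when the rendered unit differs from what is on disk. On a fresh VM the diff fails because /lib/systemd/system/docker.service does not exist yet, so the "can't stat" message above is the expected trigger for the install branch, not an error.
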
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
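
The delta is just guest minus host: 1728011503.815868228 - 1728011503.993359 = -0.177490772 s, i.e. the guest clock ran about 177 ms behind the host at that instant, close enough that no clock adjustment is needed.
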
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
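
The failed sysctl is a probe, not a fault: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the sequence is probe, modprobe br_netfilter, then enable IPv4 forwarding; both are prerequisites for bridged pod traffic to be routed and seen by iptables.
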
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
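
That one-liner keeps /etc/hosts idempotent: it filters out any existing host.minikube.internal entry, appends the fresh mapping 192.169.0.1 host.minikube.internal, writes the result to a temp file, and copies it back into place with sudo cp.
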
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
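The preload restore above copies the lz4 tarball into the guest and streams it through tar into /var, so the Docker image store is pre-populated before kubeadm pulls anything. A sketch of the same extraction step with os/exec (assuming sudo, tar and lz4 are on PATH, as they are in the minikube guest):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same extraction as the log line above: stream the lz4 preload through
	// tar into /var, preserving the security.capability xattrs that some
	// Kubernetes binaries carry.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}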
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
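Note the doubled ExecStart in the drop-in above: for a normal (non-oneshot) service, systemd rejects a second ExecStart unless an empty ExecStart= first clears the base unit's command list. A hedged sketch of writing such a drop-in (paths and flags taken from the log; the program itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	kubelet := "/var/lib/minikube/binaries/v1.31.1/kubelet" +
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf" +
		" --config=/var/lib/kubelet/config.yaml" +
		" --hostname-override=ha-214000" +
		" --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5"
	// The empty ExecStart= clears the base unit's command list so the
	// override can set its own invocation.
	dropIn := fmt.Sprintf("[Service]\nExecStart=\nExecStart=%s\n", kubelet)
	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path, "- run systemctl daemon-reload next")
}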
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
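The kubeadm document above is rendered from the option struct logged at kubeadm.go:181. A pared-down text/template sketch of that rendering approach (the struct fields and the template below are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// initCfg is a pared-down stand-in for minikube's kubeadm parameters;
// field names are illustrative.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above.
	t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.169.0.5",
		BindPort:         8443,
		NodeName:         "ha-214000",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
	})
}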
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
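With cp_enable and lb_enable set, kube-vip leader-elects on the plndr-cp-lock lease and announces the HA VIP 192.169.0.254 on eth0 via ARP. A quick diagnostic sketch (not part of the test run) that probes whether the VIP is answering on the API server port:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Once kube-vip wins the lease, the HA endpoint from the manifest above
	// should accept TCP on the API server port.
	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP answering on 8443")
}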
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
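The SAN list requested here is the usual apiserver set: the service ClusterIP (10.96.0.1), loopback, 10.0.0.1, the node IP, and the HA VIP. A crypto/x509 sketch that produces a certificate with the same IP SANs (self-signed for brevity and purely illustrative; minikube's crypto.go signs this cert against its own profile CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN set from the log above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
			net.ParseIP("192.169.0.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}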
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
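The link names like b5213941.0 above are not arbitrary: OpenSSL locates CAs in a certificate directory via <subject-hash>.<n> symlinks, and the hash is exactly what `openssl x509 -hash -noout` prints. A sketch reproducing the link path (the certificate path is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	// The printed subject hash is why the log links this PEM to
	// /etc/ssl/certs/b5213941.0.
	hash := strings.TrimSpace(string(out))
	fmt.Println(filepath.Join("/etc/ssl/certs", hash+".0"))
}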
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
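Both waits above poll unauthenticated healthz endpoints: first the kubelet on 127.0.0.1:10248, then the API server. A sketch of that style of readiness loop (the 500ms fixed backoff is an assumption; kubeadm's real client and backoff differ):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a healthz URL until it returns 200 or the deadline
// passes.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("kubelet healthy")
}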
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
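The burst of `kubectl get sa default` calls before this is a readiness gate: the minikube-rbac binding created above only becomes usable once the token controller has created the default ServiceAccount in each namespace. A sketch of the same wait loop (binary and kubeconfig paths from the log; the retry bounds are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl" // path from the log
	for i := 0; i < 20; i++ { // retry bound is an assumption
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount exists; RBAC binding is usable")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}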
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
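The long pipeline at 20:12:08.948587 patches CoreDNS's Corefile in place: it splices a hosts{} block (with fallthrough) ahead of the forward directive so pods can resolve host.minikube.internal to the host gateway, and separately adds a log directive after errors. A sketch of the hosts-insertion half done on a string instead of via sed:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A representative Corefile fragment; the real one is fetched from the
	// coredns ConfigMap.
	corefile := ".:53 {\n" +
		"        errors\n" +
		"        forward . /etc/resolv.conf\n" +
		"        cache 30\n" +
		"}\n"
	hosts := "        hosts {\n" +
		"           192.169.0.1 host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	// Insert the hosts block directly ahead of the forward directive.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	fmt.Print(patched)
}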
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
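The provision step above mints a per-node server certificate signed by the shared minikube CA, carrying exactly the SANs listed (loopback, the node IP, the hostname, and the generic names). A self-contained Go sketch of that kind of issuance using crypto/x509; the throwaway in-process CA here stands in for minikube's certs/ca.pem and ca-key.pem, and the 26280h lifetime is taken from the CertExpiration field in the config dump:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; minikube would load its existing CA cert and key instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the provision.go line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
    		DNSNames:     []string{"ha-214000-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // -> server.pem
    }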
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
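The guest-clock check above runs `date +%s.%N` over SSH, parses the seconds.nanoseconds pair, and compares it against the host clock captured at the same moment. A minimal Go sketch of that comparison, seeded with the sample value from this run; the 2s tolerance here is an assumption for illustration, not minikube's exact constant:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest (value from the run above).
    	guestOut := "1728011542.860223875"

    	secs, nsecs, _ := strings.Cut(strings.TrimSpace(guestOut), ".")
    	s, _ := strconv.ParseInt(secs, 10, 64)
    	n, _ := strconv.ParseInt(nsecs, 10, 64)
    	guest := time.Unix(s, n)

    	// The reference is the host-side timestamp taken when the command ran.
    	host := time.Now()
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}

    	const tolerance = 2 * time.Second // assumed threshold for the sketch
    	if delta > tolerance {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
    		return
    	}
    	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    }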
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.009449473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.009543254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a4df5305516c4c578f0609adbc31ec6ae6e759a660ba20c29d2130d9aacc6762/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.115790684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.115840124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.115851838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.115919907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609646837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609761053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609773733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9d4a054cd6084       c69fa2e9cbf5f                                                                                       57 seconds ago       Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                       57 seconds ago       Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                       57 seconds ago       Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166            About a minute ago   Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                       About a minute ago   Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4   About a minute ago   Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                       About a minute ago   Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                       About a minute ago   Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                       About a minute ago   Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                       About a minute ago   Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:13:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:12:36 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:12:36 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:12:36 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:12:36 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         82s
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      77s
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x3 over 88s)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x3 over 88s)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x2 over 88s)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           78s                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                59s                kubelet          Node ha-214000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006818] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 1"}
	{"level":"info","ts":"2024-10-04T03:12:00.474092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:13:26 up 1 min,  0 users,  load average: 0.52, 0.25, 0.10
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:12:13.509041       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1004 03:12:13.510346       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I1004 03:12:13.602819       1 main.go:148] setting mtu 1500 for CNI 
	I1004 03:12:13.602860       1 main.go:178] kindnetd IP family: "ipv4"
	I1004 03:12:13.602878       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1004 03:12:13.908848       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I1004 03:12:23.917759       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:12:23.917920       1 main.go:299] handling current node
	I1004 03:12:33.912972       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:12:33.913084       1 main.go:299] handling current node
	I1004 03:12:43.909183       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:12:43.909351       1 main.go:299] handling current node
	I1004 03:12:53.913039       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:12:53.913400       1 main.go:299] handling current node
	I1004 03:13:03.915905       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:13:03.915944       1 main.go:299] handling current node
	I1004 03:13:13.911520       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:13:13.911599       1 main.go:299] handling current node
	I1004 03:13:23.915598       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:13:23.915862       1 main.go:299] handling current node
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.796190       1 policy_source.go:224] refreshing policies
	I1004 03:12:01.796283       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:12:01.796352       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:12:01.796622       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:12:01.796715       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:12:08.697837       1 shared_informer.go:320] Caches are synced for PV protection
	I1004 03:12:08.697889       1 shared_informer.go:320] Caches are synced for persistent volume
	I1004 03:12:08.699027       1 shared_informer.go:320] Caches are synced for attach detach
	I1004 03:12:08.709250       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:12:08.713549       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:12:09.006126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:09.131731       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:12:09.148978       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:12:09.149208       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:12:09.625553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="382.797202ms"
	I1004 03:12:09.652060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.447748ms"
	I1004 03:12:09.652146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.664µs"
	I1004 03:12:27.637851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:27.645306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:27.652024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.708µs"
	I1004 03:12:27.653069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="120.983µs"
	I1004 03:12:27.664030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.581µs"
	I1004 03:12:27.672509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.583µs"
	I1004 03:12:28.469429       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1004 03:12:29.593099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.212µs"
	I1004 03:12:29.614886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.4063ms"
	I1004 03:12:29.615539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.139µs"
	I1004 03:12:29.623230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.7267ms"
	I1004 03:12:29.623981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.813µs"
	I1004 03:12:36.259697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.582529    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-xtables-lock\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.582602    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-xtables-lock\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.582654    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-lib-modules\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.582731    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zq67\" (UniqueName: \"kubernetes.io/projected/081b3b91-47cc-4e37-a6b8-4de271f93c97-kube-api-access-4zq67\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.582805    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-lib-modules\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.582860    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thn27\" (UniqueName: \"kubernetes.io/projected/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-kube-api-access-thn27\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:12:09 ha-214000 kubelet[2148]: I1004 03:12:09.688862    2148 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 04 03:12:10 ha-214000 kubelet[2148]: I1004 03:12:10.487308    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grxks" podStartSLOduration=1.487293591 podStartE2EDuration="1.487293591s" podCreationTimestamp="2024-10-04 03:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:12:10.478356259 +0000 UTC m=+5.210173064" watchObservedRunningTime="2024-10-04 03:12:10.487293591 +0000 UTC m=+5.219110401"
	Oct 04 03:12:15 ha-214000 kubelet[2148]: I1004 03:12:15.385986    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-flq8x" podStartSLOduration=3.255653911 podStartE2EDuration="6.385970081s" podCreationTimestamp="2024-10-04 03:12:09 +0000 UTC" firstStartedPulling="2024-10-04 03:12:10.13774589 +0000 UTC m=+4.869562698" lastFinishedPulling="2024-10-04 03:12:13.268062067 +0000 UTC m=+7.999878868" observedRunningTime="2024-10-04 03:12:13.497565644 +0000 UTC m=+8.229382454" watchObservedRunningTime="2024-10-04 03:12:15.385970081 +0000 UTC m=+10.117786887"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.628446    2148 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: W1004 03:12:27.657146    2148 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ha-214000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ha-214000' and this object
	Oct 04 03:12:27 ha-214000 kubelet[2148]: E1004 03:12:27.657280    2148 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ha-214000\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-214000' and this object" logger="UnhandledError"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.813960    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7150978-eff2-421e-9c54-ba99230bd0e7-config-volume\") pod \"coredns-7c65d6cfc9-slrtf\" (UID: \"a7150978-eff2-421e-9c54-ba99230bd0e7\") " pod="kube-system/coredns-7c65d6cfc9-slrtf"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.814054    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f5e9cfaf-fc93-45bd-9061-cf51f9eef735-tmp\") pod \"storage-provisioner\" (UID: \"f5e9cfaf-fc93-45bd-9061-cf51f9eef735\") " pod="kube-system/storage-provisioner"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.814178    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbhjl\" (UniqueName: \"kubernetes.io/projected/a7150978-eff2-421e-9c54-ba99230bd0e7-kube-api-access-fbhjl\") pod \"coredns-7c65d6cfc9-slrtf\" (UID: \"a7150978-eff2-421e-9c54-ba99230bd0e7\") " pod="kube-system/coredns-7c65d6cfc9-slrtf"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.814209    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzr8d\" (UniqueName: \"kubernetes.io/projected/f5e9cfaf-fc93-45bd-9061-cf51f9eef735-kube-api-access-jzr8d\") pod \"storage-provisioner\" (UID: \"f5e9cfaf-fc93-45bd-9061-cf51f9eef735\") " pod="kube-system/storage-provisioner"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.814229    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcd09c79-a74c-4ea6-8ed9-ddb25e4d2e8a-config-volume\") pod \"coredns-7c65d6cfc9-l4wpg\" (UID: \"fcd09c79-a74c-4ea6-8ed9-ddb25e4d2e8a\") " pod="kube-system/coredns-7c65d6cfc9-l4wpg"
	Oct 04 03:12:27 ha-214000 kubelet[2148]: I1004 03:12:27.814356    2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdf4\" (UniqueName: \"kubernetes.io/projected/fcd09c79-a74c-4ea6-8ed9-ddb25e4d2e8a-kube-api-access-pqdf4\") pod \"coredns-7c65d6cfc9-l4wpg\" (UID: \"fcd09c79-a74c-4ea6-8ed9-ddb25e4d2e8a\") " pod="kube-system/coredns-7c65d6cfc9-l4wpg"
	Oct 04 03:12:28 ha-214000 kubelet[2148]: I1004 03:12:28.577666    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=19.577652661 podStartE2EDuration="19.577652661s" podCreationTimestamp="2024-10-04 03:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:12:28.577577741 +0000 UTC m=+23.309394552" watchObservedRunningTime="2024-10-04 03:12:28.577652661 +0000 UTC m=+23.309469465"
	Oct 04 03:12:29 ha-214000 kubelet[2148]: I1004 03:12:29.607339    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-slrtf" podStartSLOduration=20.607326271 podStartE2EDuration="20.607326271s" podCreationTimestamp="2024-10-04 03:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:12:29.594355774 +0000 UTC m=+24.326172579" watchObservedRunningTime="2024-10-04 03:12:29.607326271 +0000 UTC m=+24.339143082"
	Oct 04 03:13:05 ha-214000 kubelet[2148]: E1004 03:13:05.386064    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:13:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:13:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:13:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:13:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [792bd20fa10c] <==
	I1004 03:12:28.163172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:12:28.169508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:12:28.169598       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:12:28.174405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:12:28.174568       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a!
	I1004 03:12:28.175244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9362f6e1-9a8a-4609-b2ed-601ec5b3e435", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a became leader
	I1004 03:12:28.280846       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (118.37s)
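
The kube-scheduler "forbidden" warnings captured in the post-mortem above are the usual startup race: the scheduler's informers start listing cluster-scoped resources before the apiserver has finished bootstrapping RBAC for system:kube-scheduler, and the reflector simply retries until "Caches are synced" is logged. As a point of reference, a minimal client-go sketch of the same cluster-scoped List can confirm whether the grant has landed; the kubeconfig path below is illustrative, not taken from this run.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; point this at the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/ha-214000.kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same cluster-scoped List the reflector retries in the log above.
		ns, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			// Expected to be a Forbidden error until RBAC bootstrapping completes.
			fmt.Println("list namespaces:", err)
			return
		}
		fmt.Printf("RBAC ready: %d namespaces visible\n", len(ns.Items))
	}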

TestMultiControlPlane/serial/DeployApp (686.51s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- rollout status deployment/busybox
E1003 20:13:29.676740    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:10.848715    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:10.856576    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:10.868321    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:10.889796    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:10.931556    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:11.014981    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:11.176377    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:11.499034    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:12.140513    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:13.422423    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:15.984756    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:21.106519    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:31.349985    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:15:51.842883    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:16:32.819606    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:17:54.742998    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:18:01.990880    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:20:10.877262    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:20:38.586008    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:23:01.994308    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- rollout status deployment/busybox: exit status 1 (10m7.983590565s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
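
kubectl rollout status gives up once the Deployment reports its Progressing condition as False with reason ProgressDeadlineExceeded; with the default progressDeadlineSeconds of 600, that is consistent with the roughly ten-minute wait logged above. A hedged client-go sketch of that condition check follows; the kubeconfig path, namespace, and deployment name are illustrative.

	package main

	import (
		"context"
		"fmt"

		appsv1 "k8s.io/api/apps/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/ha-214000.kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		d, err := cs.AppsV1().Deployments("default").Get(context.TODO(), "busybox", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range d.Status.Conditions {
			// A deadline-exceeded rollout surfaces here as
			// Progressing=False, reason=ProgressDeadlineExceeded.
			if c.Type == appsv1.DeploymentProgressing {
				fmt.Printf("Progressing=%v reason=%s: %s\n", c.Status, c.Reason, c.Message)
			}
		}
	}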
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:36.007654    2003 retry.go:31] will retry after 628.296322ms: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:36.784367    2003 retry.go:31] will retry after 830.791254ms: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:37.764570    2003 retry.go:31] will retry after 2.005157723s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:39.918680    2003 retry.go:31] will retry after 4.113987195s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:44.181568    2003 retry.go:31] will retry after 5.451111823s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:49.797935    2003 retry.go:31] will retry after 8.901845516s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:23:58.850557    2003 retry.go:31] will retry after 7.19085534s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:24:06.207156    2003 retry.go:31] will retry after 22.999380427s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E1003 20:24:25.072756    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1003 20:24:29.358182    2003 retry.go:31] will retry after 20.292318876s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
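
The retry.go lines above poll the jsonpath query at growing intervals until every busybox replica reports an IP; here the count never moved past one. A rough standalone equivalent of that polling loop is sketched below, reusing the binary path from this report; the backoff schedule and helper are illustrative rather than the suite's own code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs runs the same jsonpath query the test issues and splits the result.
	func podIPs(profile string) ([]string, error) {
		out, err := exec.Command("out/minikube-darwin-amd64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		backoff := 500 * time.Millisecond
		for i := 0; i < 10; i++ {
			ips, err := podIPs("ha-214000")
			if err == nil && len(ips) == 3 {
				fmt.Println("all pod IPs assigned:", ips)
				return
			}
			fmt.Printf("got %d IPs, retrying after %v\n", len(ips), backoff)
			time.Sleep(backoff)
			backoff *= 2 // roughly mirrors the growing intervals logged above
		}
		fmt.Println("pods never received three IPs; replicas are likely unscheduled")
	}
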
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.io: exit status 1 (130.238873ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-9tvdj does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7dff88458-9tvdj could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-m7hqf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- nslookup kubernetes.io: exit status 1 (128.98594ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-z5g4l does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7dff88458-z5g4l could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.default: exit status 1 (129.736801ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-9tvdj does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7dff88458-9tvdj could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-m7hqf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- nslookup kubernetes.default: exit status 1 (130.772815ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-z5g4l does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7dff88458-z5g4l could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (130.312497ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-9tvdj does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7dff88458-9tvdj could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-m7hqf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (130.05791ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-z5g4l does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7dff88458-z5g4l could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
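
"does not have a host assigned" is the apiserver's BadRequest response to exec against a pod that is still Pending with no node, so two of the three busybox replicas were never scheduled, which squares with the single pod IP observed earlier. A small client-go sketch of the same check the post-mortem's field selector performs; the kubeconfig path is illustrative.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/ha-214000.kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter the post-mortem uses: every pod not yet Running.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// An empty NodeName confirms the pod was never scheduled.
			fmt.Printf("%s/%s phase=%s node=%q\n", p.Namespace, p.Name, p.Status.Phase, p.Spec.NodeName)
		}
	}
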
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.257879692s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-042000 image ls           | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	| delete  | -p functional-042000                 | functional-042000 | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT | 03 Oct 24 20:11 PDT |
	| start   | -p ha-214000 --wait=true             | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:11 PDT |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=hyperkit                    |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- apply -f             | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:13 PDT | 03 Oct 24 20:13 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- rollout status       | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:13 PDT |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000         | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
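The retry attempts above show how the driver discovers the new VM's IP: hyperkit guests get their addresses from macOS's built-in DHCP server, so the driver polls /var/db/dhcpd_leases until an entry carrying the MAC it generated (a:aa:e8:3c:fe:20) appears. Below is a minimal Go sketch of that lookup, assuming the stock lease-file layout (name=/ip_address=/hw_address= lines per entry); it is illustrative, not the driver's actual code.

-- example (Go, illustrative) --
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans macOS's dhcpd lease file for the entry whose hardware
// address matches mac and returns that entry's IP address.
func findIPForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// ip_address precedes hw_address within each { ... } entry
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// entries look like: hw_address=1,a:aa:e8:3c:fe:20 -- drop the "1," type tag
			parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
			if len(parts) == 2 && parts[1] == mac {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("IP:", ip)
}
-- /example --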
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
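"Waiting for SSH" above is a plain readiness probe: run a no-op `exit 0` over SSH until it succeeds, which the <nil> error just logged confirms. A minimal sketch of the pattern, shelling out to the system ssh client rather than using libmachine's native Go SSH client (user, host, and key path are taken from this run):

-- example (Go, illustrative) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op remote command until the guest's sshd accepts it.
func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0").Run()
		if err == nil {
			return nil // sshd is up and our key is accepted
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready within %s", host, timeout)
}

func main() {
	key := "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa"
	if err := waitForSSH("docker", "192.169.0.5", key, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
-- /example --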
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
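The guest-clock check above works by running `date +%s.%N` on the guest and differencing it against the host clock: 1728011503.815868228 is 20:11:43.815868228 PDT, 177.49ms behind the host's 20:11:43.993359, which is inside tolerance. A sketch of the same arithmetic (the 2s tolerance used here is an assumption, not minikube's configured value):

-- example (Go, illustrative) --
package main

import (
	"fmt"
	"math"
	"time"
)

// clockDelta reports guest-minus-host drift and whether it is within tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	pdt := time.FixedZone("PDT", -7*60*60)
	guest := time.Unix(1728011503, 815868228).In(pdt)          // guest's `date +%s.%N`
	host := time.Date(2024, 10, 3, 20, 11, 43, 993359000, pdt) // host clock at the probe
	delta, ok := clockDelta(guest, host, 2*time.Second)        // tolerance is assumed
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)    // delta=-177.490772ms within tolerance=true
}
-- /example --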
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
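The status-255 sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the failed probe triggers the modprobe fallback before IPv4 forwarding is switched on. A sketch of that probe-then-load sequence (Linux-only, needs sudo; the commands mirror the ssh_runner lines above):

-- example (Go, illustrative) --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// 1. Probe: exits non-zero while br_netfilter is not loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// 2. Fallback: load the module so the sysctl node appears.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
			os.Exit(1)
		}
	}
	// 3. Enable forwarding, as `echo 1 > /proc/sys/net/ipv4/ip_forward` does.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "ip_forward:", err)
	}
}
-- /example --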
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
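The preload path above avoids pulling every image individually: a stat existence check fails, the ~342 MB tarball is scp'd to /preloaded.tar.lz4, and it is unpacked into /var with extended attributes preserved so Docker's overlay2 store arrives fully populated. A sketch of the check-then-unpack half, driving the system ssh client with the credentials from this run (the scp step itself is elided):

-- example (Go, illustrative) --
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command on the guest via the system ssh client.
func run(remote string) error {
	out, err := exec.Command("ssh",
		"-i", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa",
		"docker@192.169.0.5", remote).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", remote, err, out)
	}
	return nil
}

func main() {
	// Existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4` above.
	if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		fmt.Println("tarball missing; it would be scp'd over first:", err)
		return
	}
	// Unpack into /var with xattrs preserved, exactly as the log shows.
	if err := run(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`); err != nil {
		fmt.Println(err)
	}
}
-- /example --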
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
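	The kubeadm manifest just printed is rendered from the option struct logged at kubeadm.go:181 above. A minimal sketch of that render step using text/template follows; the template fragment here is hypothetical and far shorter than minikube's real one.

-- example (Go, illustrative) --
package main

import (
	"os"
	"text/template"
)

// A hypothetical, trimmed-down fragment of the kind of template that
// produces the InitConfiguration block shown above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the kubeadm options logged above.
	err := t.Execute(os.Stdout, struct {
		NodeIP, CRISocket, Name string
		Port                    int
	}{"192.169.0.5", "unix:///var/run/cri-dockerd.sock", "ha-214000", 8443})
	if err != nil {
		panic(err)
	}
}
-- /example --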
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
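	The static pod above runs kube-vip on the control-plane node so that the VIP 192.169.0.254:8443 (the APIServerHAVIP from the cluster config) stays reachable across control-plane failures; lb_enable additionally load-balances API traffic on port 8443. A hypothetical readiness check, not minikube code, that waits for the VIP to accept TCP:

-- example (Go, illustrative) --
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.169.0.254:8443" // kube-vip's advertised address and lb_port
	for i := 0; i < 30; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("control-plane VIP answering:", addr)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("VIP never answered:", addr)
}
-- /example --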
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
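The one-liner above is the idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, then append the current VIP. The same filter-and-append logic as a Go sketch (it operates on a local copy; the path and permissions are assumptions):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "hosts.copy" // placeholder; the real target is /etc/hosts
	const entry = "192.169.0.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror grep -v $'\tcontrol-plane.minikube.internal$'
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("hosts file updated")
}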
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
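certs.go mints the profile certificates off the shared minikubeCA: a client cert with no SANs and an apiserver cert whose SAN IPs cover the service VIP (10.96.0.1), localhost, the node IP, and the HA VIP. A compressed sketch of CA-signed issuance with crypto/x509 using the same SAN IPs; key sizes, subjects, and validity here are arbitrary, and minikube's actual implementation differs (errors elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (stands in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf apiserver certificate with the SAN IPs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
			net.ParseIP("192.169.0.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}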
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
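The three test-and-ln runs above install each CA into OpenSSL's hashed directory layout: /etc/ssl/certs/<subject-hash>.0 must point at the PEM, with the hash taken from openssl x509 -hash -noout (e.g. b5213941 for minikubeCA.pem). The same two steps from Go, shelling out to openssl for the hash; both paths are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "minikubeCA.pem" // placeholder for /usr/share/ca-certificates/minikubeCA.pem
	certsDir := "certs"          // placeholder for /etc/ssl/certs

	// openssl x509 -hash -noout prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror ln -fs: replace an existing link
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}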
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
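The --discovery-token-ca-cert-hash in the join commands above is kubeadm's CA public-key pin: the hex SHA-256 of the CA certificate's Subject Public Key Info, prefixed with sha256:. It can be recomputed from ca.crt in a few lines (the path is a placeholder):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // placeholder for /var/lib/minikube/certs/ca.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}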
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
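The repeated kubectl get sa default runs above are minikube waiting, at roughly 500ms intervals, for the default ServiceAccount to exist before the kube-system privileges step can complete. A plain retry loop with the same shape; the timeout and the assumption that kubectl is on PATH with a working kubeconfig are placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout chosen for the sketch
	for time.Now().Before(deadline) {
		// Succeeds once the default ServiceAccount has been created.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}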
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
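The replace pipeline a few lines up edits the coredns ConfigMap in place: sed inserts a hosts block resolving host.minikube.internal to the host gateway (192.169.0.1) immediately before the Corefile's forward . /etc/resolv.conf line. The same insertion as a string transform in Go; the Corefile literal below is a representative default, not the one captured in this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
}`

	hostsBlock := `        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }`

	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Mirror sed's /^        forward . \/etc\/resolv.conf.*/i insertion.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}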
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
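The decisive line in the ha-214000-m02 journal above is the restarted dockerd (PID 913) timing out after exactly sixty seconds: `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`. dockerd came up at 03:12:24 and gave up at 03:13:24, which is the shape of a dial timeout against a containerd that never answered. A minimal follow-up sketch, assuming the m02 VM from this profile is still up and reachable:

	out/minikube-darwin-amd64 ssh -p ha-214000 -n m02 "sudo systemctl status containerd --no-pager"
	out/minikube-darwin-amd64 ssh -p ha-214000 -n m02 "sudo journalctl -u containerd --no-pager | tail -n 30"

Whether containerd was stopped as part of the same restart sequence or crashed outright cannot be determined from this excerpt alone.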
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
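The bursts of `loading plugin ... runtime=io.containerd.runc.v2` above are per-container shim start-ups, not daemon restarts; each group corresponds to one container coming up. The cri-dockerd lines show pod sandbox DNS being wired in: the two 03:12:28 sandboxes (the coredns pods, per the container status table below) get the host resolver 192.169.0.1, while the 03:13:28 sandbox (busybox) gets the cluster DNS 10.96.0.10, consistent with dnsPolicy Default versus ClusterFirst. One way to check what a sandbox actually resolved with, assuming the VM is still up:

	out/minikube-darwin-amd64 ssh -p ha-214000 "docker inspect -f '{{.ResolvConfPath}}' 241895c2dd1d7"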
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         12 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         12 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         12 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              12 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         12 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     12 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         12 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         12 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         12 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         12 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
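This table is the CRI view of the node, served through the cri-dockerd socket recorded in the node annotations below (unix:///var/run/cri-dockerd.sock). Assuming crictl is present in the guest image, as it normally is on minikube, it can be reproduced directly on the VM:

	out/minikube-darwin-amd64 ssh -p ha-214000 "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a"

All eleven containers on the primary node are Running with zero restart attempts, so the failure in this run is confined to the m02 node.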
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
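The random-label HINFO query answered NXDOMAIN at the top of each coredns log appears to be the loop plugin's self-probe; NXDOMAIN there means no forwarding loop was detected, so both replicas came up healthy. The same logs can be pulled straight from the cluster:

	kubectl --context ha-214000 -n kube-system logs coredns-7c65d6cfc9-l4wpg
	kubectl --context ha-214000 -n kube-system logs coredns-7c65d6cfc9-slrtf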
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:24:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x3 over 12m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-214000 status is now: NodeReady
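The allocation math checks out against the capacity block above: CPU requests sum to 100m x 2 (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 950m, i.e. 47% of the node's 2 cores. This whole block is ordinary `kubectl describe node` output and can be regenerated at any time:

	kubectl --context ha-214000 describe node ha-214000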
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:24:53 up 13 min,  0 users,  load average: 0.04, 0.18, 0.16
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:22:43.501626       1 main.go:299] handling current node
	I1004 03:22:53.505023       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:22:53.505044       1 main.go:299] handling current node
	I1004 03:23:03.499418       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:03.499672       1 main.go:299] handling current node
	I1004 03:23:13.496364       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:13.496468       1 main.go:299] handling current node
	I1004 03:23:23.496482       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:23.496678       1 main.go:299] handling current node
	I1004 03:23:33.500033       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:33.500062       1 main.go:299] handling current node
	I1004 03:23:43.506196       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:43.506328       1 main.go:299] handling current node
	I1004 03:23:53.496551       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:53.496591       1 main.go:299] handling current node
	I1004 03:24:03.505646       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:03.505701       1 main.go:299] handling current node
	I1004 03:24:13.496991       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:13.497131       1 main.go:299] handling current node
	I1004 03:24:23.497292       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:23.497564       1 main.go:299] handling current node
	I1004 03:24:33.496722       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:33.496843       1 main.go:299] handling current node
	I1004 03:24:43.498582       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:43.499114       1 main.go:299] handling current node
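Every ten-second reconcile here handles exactly one node (192.169.0.5), which corroborates the m02 story: from kindnet's point of view the second control-plane node never registered. A quick cross-check:

	kubectl --context ha-214000 get nodes -o wide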
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.796622       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:12:01.796715       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
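The three `use of closed network connection` errors at 03:24:50-51 are reads on 192.169.0.254:8443, apparently the kube-vip virtual address (the node itself is 192.169.0.5), and plausibly come from the post-mortem `minikube status` probes run by the test harness below; they indicate abruptly closed client connections, not an apiserver fault. Apiserver health can be queried independently of those probes:

	kubectl --context ha-214000 get --raw='/readyz?verbose'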
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:12:09.149208       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:12:09.625553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="382.797202ms"
	I1004 03:12:09.652060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.447748ms"
	I1004 03:12:09.652146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.664µs"
	I1004 03:12:27.637851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:27.645306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:27.652024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.708µs"
	I1004 03:12:27.653069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="120.983µs"
	I1004 03:12:27.664030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.581µs"
	I1004 03:12:27.672509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.583µs"
	I1004 03:12:28.469429       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1004 03:12:29.593099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.212µs"
	I1004 03:12:29.614886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.4063ms"
	I1004 03:12:29.615539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.139µs"
	I1004 03:12:29.623230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.7267ms"
	I1004 03:12:29.623981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.813µs"
	I1004 03:12:36.259697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:13:28.026340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.224984ms"
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
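The `default/busybox-7dff88458` sync entries at 03:13:28 and 03:13:35 bracket the busybox image pull seen in the Docker section, i.e. the controller re-synced the ReplicaSet as soon as the image landed and the pod came up. Current replica state for that ReplicaSet:

	kubectl --context ha-214000 get rs busybox-7dff88458 -o wide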
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
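The orphaned `add table ip kube-proxy` fragment at the top of this section reads as the tail of an nftables cleanup error identical in shape to the ip6 one that follows, truncated by the log window. Both are expected on this guest kernel: nftables is unsupported, so kube-proxy logs the cleanup failure and proceeds with the iptables proxier in IPv4 single-stack mode, after which all three config caches sync. The programmed rules can be inspected on the node:

	out/minikube-darwin-amd64 ssh -p ha-214000 "sudo iptables -t nat -S KUBE-SERVICES | head"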
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
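The wall of `forbidden` watch/list errors is the usual scheduler start-up race: the informers begin before the bootstrap RBAC bindings have propagated, retry, and stop erroring once authorization catches up, which is what the final `Caches are synced` line at 03:12:03 shows. Nothing here needs action; later recurrences can be checked with:

	kubectl --context ha-214000 -n kube-system logs kube-scheduler-ha-214000 | tail -n 5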
	
	
	==> kubelet <==
	Oct 04 03:20:05 ha-214000 kubelet[2148]: E1004 03:20:05.385800    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:20:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:20:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:20:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:20:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:21:05 ha-214000 kubelet[2148]: E1004 03:21:05.386512    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [792bd20fa10c] <==
	I1004 03:12:28.163172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:12:28.169508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:12:28.169598       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:12:28.174405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:12:28.174568       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a!
	I1004 03:12:28.175244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9362f6e1-9a8a-4609-b2ed-601ec5b3e435", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a became leader
	I1004 03:12:28.280846       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a!
	

                                                
                                                
-- /stdout --
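Note on the repeating kubelet errors in the log block above: roughly once a minute (03:20:05, 03:21:05, ...) the kubelet probes iptables by (re)creating a throwaway chain, KUBE-KUBELET-CANARY, in the nat table, and in this guest the ip6tables nat table is unavailable, so every probe fails with exit status 3. A minimal Go sketch of that style of probe, shelling out to ip6tables the way the log output suggests (illustrative only; kubelet's real implementation uses its internal iptables utility package, not os/exec like this):

// canaryprobe: illustrative reproduction of the failing check in the
// kubelet log above: try to create a throwaway chain in ip6tables' nat
// table. On this minikube guest the nat table is missing, so the command
// exits non-zero with "Table does not exist".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same table and chain name as in the log: table="nat" chain="KUBE-KUBELET-CANARY".
	cmd := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Expected here: "can't initialize ip6tables table `nat': Table does not exist"
		fmt.Printf("canary failed: %v\n%s", err, out)
		return
	}
	fmt.Println("canary chain created; ip6tables nat table is available")
}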
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-9tvdj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9xkpc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9xkpc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  79s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  79s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (686.51s)
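The FailedScheduling events above ("1 node(s) didn't match pod anti-affinity rules") are what a required pod anti-affinity rule produces once replicas outnumber schedulable nodes: the HA cluster came up with a single node, so only one of the three busybox replicas (m7hqf) could be placed, leaving 9tvdj and z5g4l Pending. A minimal Go sketch of a Deployment that reproduces that event, assuming (not confirmed from the test manifest) required anti-affinity on the app=busybox label keyed by kubernetes.io/hostname:

// Sketch of a Deployment whose required pod anti-affinity reproduces the
// FailedScheduling event above on a single-node cluster: with replicas=3
// and one node, two pods can never be bound.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "busybox"}
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Affinity: &corev1.Affinity{
						PodAntiAffinity: &corev1.PodAntiAffinity{
							// "Required" rules hard-block scheduling; on one node
							// only the first replica fits, matching the events above.
							RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
								LabelSelector: &metav1.LabelSelector{MatchLabels: labels},
								TopologyKey:   "kubernetes.io/hostname",
							}},
						},
					},
					Containers: []corev1.Container{{
						Name:    "busybox",
						Image:   "gcr.io/k8s-minikube/busybox:1.28",
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	out, err := yaml.Marshal(dep)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

Switching the rule to preferredDuringSchedulingIgnoredDuringExecution would let the remaining replicas co-locate on the single node instead of staying Pending.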

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (128.957837ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-9tvdj does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7dff88458-9tvdj could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-m7hqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-m7hqf -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-z5g4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (132.753065ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-z5g4l does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7dff88458-z5g4l could not resolve 'host.minikube.internal': exit status 1
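The BadRequest errors above follow directly from the DeployApp failure: kubectl exec is proxied through the kubelet on the pod's bound node, so a Pending pod with no node assignment cannot be exec'd at all. A minimal client-go sketch of the same pre-check (illustrative only; the test itself shells out to kubectl, and the KUBECONFIG handling and hard-coded pod name here are assumptions):

// Check whether a pod is bound to a node before attempting an exec.
// An empty spec.nodeName is the condition the API server reports above as
// "pod ... does not have a host assigned".
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("default").
		Get(context.Background(), "busybox-7dff88458-9tvdj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Spec.NodeName == "" {
		fmt.Printf("pod %s is %s and has no node; exec would fail\n",
			pod.Name, pod.Status.Phase)
		return
	}
	fmt.Printf("pod %s is on node %s; exec is possible\n", pod.Name, pod.Spec.NodeName)
}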
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.176811441s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
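The san=[...] list on this line becomes the subjectAltName set of the Docker server certificate. A self-contained sketch of an equivalent signing step with Go's crypto/x509 (an illustration of the mechanism, not minikube's actual code; both keys are generated in-process here, whereas the real run signs with the ca-key.pem on disk):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Error handling elided for brevity in this sketch.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
            // The san=[...] list from the log, split into DNS names and IPs.
            DNSNames:    []string{"ha-214000", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0),
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }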
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
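configureAuth finishes in ~188ms because each PEM is simply pushed over the SSH connection opened above. One way to do that with golang.org/x/crypto/ssh, streaming into `sudo tee` so the unprivileged docker user can write under /etc/docker (the transport details here are my own sketch, not necessarily minikube's):

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushFile streams local bytes into a root-owned remote path via sudo tee.
    func pushFile(c *ssh.Client, data []byte, remote string) error {
        s, err := c.NewSession()
        if err != nil {
            return err
        }
        defer s.Close()
        s.Stdin = bytes.NewReader(data)
        return s.Run("sudo tee " + remote + " >/dev/null")
    }

    func main() {
        keyPEM, err := os.ReadFile("id_rsa") // the machine key shown in the log
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        c, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer c.Close()
        ca, err := os.ReadFile("ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        if err := pushFile(c, ca, "/etc/docker/ca.pem"); err != nil {
            log.Fatal(err)
        }
    }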
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
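Worth noting how the command above stays idempotent: `diff -u old new` exits non-zero when the files differ or when old is missing (the "can't stat" in the output is the first-boot case), so the `|| { ... }` branch installs the unit and restarts docker only when the content actually changed. A sketch of assembling that command (helper name hypothetical):

    package main

    import "fmt"

    // swapIfChangedCmd installs <unit>.new over <unit> and restarts the service,
    // but only when diff reports a difference (or the old file does not exist).
    func swapIfChangedCmd(unit string) string {
        p := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            p, unit)
    }

    func main() { fmt.Println(swapIfChangedCmd("docker.service")) }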
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
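The fix.go lines above compare the guest's `date +%s.%N` output against the host clock; a delta of -177ms lands inside tolerance, so no resync happens. A sketch of that parse-and-compare (the 2s tolerance below is my assumption, not minikube's constant):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses `date +%s.%N` output, e.g. "1728011503.815868228".
    // %N is zero-padded to nine digits, so the fraction maps directly to nanoseconds.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := guestTime("1728011503.815868228") // value captured in the log
        delta := time.Since(guest)
        fmt.Printf("guest clock delta: %v\n", delta)
        if delta > 2*time.Second || delta < -2*time.Second { // tolerance is an assumption
            fmt.Println("would resync the guest clock here")
        }
    }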
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
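The status-255 sysctl failure above is the expected first-boot path: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the fallback loads the module and then enables IPv4 forwarding. A sketch of that decision (runSSH stands in for the ssh_runner calls in the log):

    package main

    import "fmt"

    func ensureBridgeNetfilter(runSSH func(string) error) {
        if err := runSSH("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            // The sysctl key only appears once the module is loaded.
            _ = runSSH("sudo modprobe br_netfilter")
        }
        _ = runSSH(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }

    func main() {
        ensureBridgeNetfilter(func(cmd string) error {
            fmt.Println("would run:", cmd)
            return nil
        })
    }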
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
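Both "Will wait 60s" messages above are the same retry pattern: poll a probe (first stat on the CRI socket, then `crictl version`) until it succeeds or the deadline passes. A generic sketch of that pattern (the 500ms interval is my assumption):

    package main

    import (
        "fmt"
        "time"
    )

    func waitFor(probe func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := probe()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v: %w", timeout, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        // e.g. probe could stat /var/run/cri-dockerd.sock over SSH
        _ = waitFor(func() error { return nil }, 60*time.Second)
    }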
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
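The sequence above is the preload fast path: a stat existence check, a one-time ~326 MiB copy of the lz4 tarball, extraction straight into /var so Docker's image store is pre-populated, then a docker restart. A condensed sketch with stand-in runner/push functions (the real code drives these over its ssh_runner):

    package main

    import "fmt"

    // loadPreload ships the image tarball only when the guest lacks it, unpacks
    // it into /var, and restarts docker so the preloaded store is picked up.
    func loadPreload(run func(string) error, push func(local, remote string) error) error {
        if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            if err := push("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        if err := run("rm /preloaded.tar.lz4"); err != nil {
            return err
        }
        return run("sudo systemctl restart docker")
    }

    func main() {
        echo := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
        _ = loadPreload(echo, func(l, r string) error { fmt.Println("would copy:", l, "->", r); return nil })
    }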
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
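The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are most likely rendered from a Go template filled with the kubeadm options struct logged just before them. A trimmed sketch of that mechanism, reduced to the InitConfiguration stanza (field names here are illustrative, not minikube's):

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        _ = t.Execute(os.Stdout, struct {
            NodeIP   string
            Port     int
            NodeName string
        }{"192.169.0.5", 8443, "ha-214000"})
    }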
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
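This manifest is the HA piece of the profile: kube-vip holds the 192.169.0.254 virtual IP through leader election on the plndr-cp-lock lease and, because the modprobe probe above succeeded, lb_enable turns on load-balancing across control-plane apiservers on port 8443. A sketch of that auto-enable decision (runSSH stands in for the ssh_runner call):

    package main

    import "fmt"

    // enableLB mirrors the auto-enable decision logged above: load-balancing is
    // only turned on when the guest kernel can load the ip_vs module family.
    func enableLB(runSSH func(string) error) bool {
        return runSSH(`sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"`) == nil
    }

    func main() {
        ok := enableLB(func(cmd string) error { fmt.Println("would run:", cmd); return nil })
        fmt.Println("lb_enable:", ok)
    }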
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
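The openssl/ln pairs above install each CA into the guest trust store the way OpenSSL expects: a <subject-hash>.0 symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0, and b5213941.0 here). A local sketch of the same dance (run directly rather than over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash creates the <subject-hash>.0 symlink OpenSSL uses to look up
    // trust anchors in /etc/ssl/certs.
    func linkByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mimic ln -fs: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }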
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
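The four grep/rm pairs above are stale-kubeconfig cleanup: any file that does not point at https://control-plane.minikube.internal:8443 is removed so the kubeadm init below regenerates it; on this first boot every file is simply missing. A compact sketch (runSSH stands in for the ssh_runner calls):

    package main

    import "fmt"

    func cleanupStaleConfigs(runSSH func(string) error) {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            if err := runSSH("sudo grep " + endpoint + " " + path); err != nil {
                _ = runSSH("sudo rm -f " + path) // missing or pointing elsewhere: regenerate
            }
        }
    }

    func main() {
        cleanupStaleConfigs(func(cmd string) error { fmt.Println("would run:", cmd); return nil })
    }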
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
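The --discovery-token-ca-cert-hash in the join commands above is not arbitrary: it is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch of that derivation (the ca.crt path is an assumption based on the certificateDir logged earlier):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }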
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
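The oom_adj probe above confirms the apiserver is shielded from the kernel OOM killer; -16 strongly biases the killer away from that process. The same check as a Go sketch (this runs inside the guest, not on the Mac host):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep can print several PIDs; take the first match.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }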
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
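The five "kubectl get sa default" runs above, roughly 500ms apart, are a poll: the privilege-elevation step waits until the "default" ServiceAccount exists before proceeding. A sketch of that retry loop (command and kubeconfig path come from the log; the overall timeout is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // assumed bound
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence logged above
        }
        fmt.Println("timed out waiting for the default service account")
    }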
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
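The rest.Config dump above contains everything needed to reach the HA virtual IP at https://192.169.0.254:8443. A minimal client built from those same certificate paths with k8s.io/client-go (the node listing is only an illustration, not what minikube does at this point):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.169.0.254:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt",
            },
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }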
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
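Unescaped, the sed pipeline above splices this hosts stanza into the CoreDNS Corefile just ahead of its forward block (and adds "log" after "errors"); it is what later makes host.minikube.internal resolve to the host at 192.169.0.1:

        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }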
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
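WaitForSSH above keeps dialing port 22 and running "exit 0" until sshd inside the new VM answers. A sketch of that probe with golang.org/x/crypto/ssh (user and retry cadence are assumptions; the address and key path are from the log):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
            Timeout:         5 * time.Second,
        }
        for {
            client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
            if err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    runErr := sess.Run("exit 0") // succeeds once sshd is ready
                    sess.Close()
                    if runErr == nil {
                        client.Close()
                        fmt.Println("SSH is available")
                        return
                    }
                }
                client.Close()
            }
            time.Sleep(time.Second)
        }
    }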
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         12 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         12 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         12 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              12 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         12 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     12 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         12 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         12 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         12 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         12 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
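	
	The lookup sequence above is the pod resolver walking its search path: the bare
	"kubernetes.default." query and the over-expanded "kubernetes.default.default.svc.cluster.local"
	both return NXDOMAIN, and only "kubernetes.default.svc.cluster.local" resolves, so these
	NXDOMAIN entries are expected behavior rather than a DNS fault. A quick way to reproduce
	this from inside the cluster (pod name taken from the node description below) would be
	something like:
	
	  kubectl --context ha-214000 exec busybox-7dff88458-m7hqf -- nslookup kubernetes.default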
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:24:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x3 over 12m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-214000 status is now: NodeReady
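	
	Two things stand out in the node description above: only ha-214000 appears at capture
	time, and control-plane requests already account for 950m of the node's 2 CPUs (47%),
	so scheduling headroom is tight. The same view can be regenerated at any point with:
	
	  kubectl --context ha-214000 describe node ha-214000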
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:24:56 up 13 min,  0 users,  load average: 0.36, 0.24, 0.18
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:22:53.505044       1 main.go:299] handling current node
	I1004 03:23:03.499418       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:03.499672       1 main.go:299] handling current node
	I1004 03:23:13.496364       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:13.496468       1 main.go:299] handling current node
	I1004 03:23:23.496482       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:23.496678       1 main.go:299] handling current node
	I1004 03:23:33.500033       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:33.500062       1 main.go:299] handling current node
	I1004 03:23:43.506196       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:43.506328       1 main.go:299] handling current node
	I1004 03:23:53.496551       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:23:53.496591       1 main.go:299] handling current node
	I1004 03:24:03.505646       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:03.505701       1 main.go:299] handling current node
	I1004 03:24:13.496991       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:13.497131       1 main.go:299] handling current node
	I1004 03:24:23.497292       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:23.497564       1 main.go:299] handling current node
	I1004 03:24:33.496722       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:33.496843       1 main.go:299] handling current node
	I1004 03:24:43.498582       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:43.499114       1 main.go:299] handling current node
	I1004 03:24:53.496563       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:53.496603       1 main.go:299] handling current node
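	
	kindnet is iterating its node list every ten seconds and only ever sees 192.169.0.5,
	i.e. a single node, which is consistent with the "0/1 nodes are available" scheduling
	failures reported further down. A direct cross-check:
	
	  kubectl --context ha-214000 get nodes -o wide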
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
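	
	The "use of closed network connection" errors at 03:24:50-03:24:54 line up with the
	post-mortem status probes hitting the VIP 192.169.0.254:8443 from the host; abruptly
	closed client connections are logged at this level but are typically harmless. The
	endpoint itself can be probed directly:
	
	  kubectl --context ha-214000 get --raw /healthz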
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:12:09.149208       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:12:09.625553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="382.797202ms"
	I1004 03:12:09.652060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.447748ms"
	I1004 03:12:09.652146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.664µs"
	I1004 03:12:27.637851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:27.645306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:12:27.652024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.708µs"
	I1004 03:12:27.653069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="120.983µs"
	I1004 03:12:27.664030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.581µs"
	I1004 03:12:27.672509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.583µs"
	I1004 03:12:28.469429       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1004 03:12:29.593099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.212µs"
	I1004 03:12:29.614886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.4063ms"
	I1004 03:12:29.615539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.139µs"
	I1004 03:12:29.623230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.7267ms"
	I1004 03:12:29.623981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.813µs"
	I1004 03:12:36.259697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:13:28.026340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.224984ms"
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
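	
	The truncated nftables errors at the top of this section come from kube-proxy's rule
	cleanup probing nftables on a kernel that does not support it ("Operation not
	supported"); the subsequent "Using iptables Proxier" line shows it fell back cleanly to
	iptables mode. The active ruleset can be confirmed with something like:
	
	  out/minikube-darwin-amd64 ssh -p ha-214000 "sudo iptables -t nat -L KUBE-SERVICES | head"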
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
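	
	The burst of "forbidden" reflector errors is the usual startup race: the scheduler's
	informers begin listing resources before the apiserver has finished bootstrapping RBAC,
	and the final "Caches are synced" line shows the retries succeeded. Had they persisted,
	the binding could be checked with:
	
	  kubectl --context ha-214000 get clusterrolebinding system:kube-scheduler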
	
	
	==> kubelet <==
	Oct 04 03:20:05 ha-214000 kubelet[2148]: E1004 03:20:05.385800    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:20:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:20:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:20:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:20:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:21:05 ha-214000 kubelet[2148]: E1004 03:21:05.386512    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
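	
	The hourly iptables-canary failures are kubelet probing the IPv6 nat table on a guest
	kernel that does not provide it; kube-proxy already reported "No iptables support for
	family IPv6" above, and the cluster runs single-stack IPv4, so these entries are noise.
	The same error reproduces directly in the VM:
	
	  out/minikube-darwin-amd64 ssh -p ha-214000 "sudo ip6tables -t nat -L"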
	
	
	==> storage-provisioner [792bd20fa10c] <==
	I1004 03:12:28.163172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:12:28.169508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:12:28.169598       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:12:28.174405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:12:28.174568       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a!
	I1004 03:12:28.175244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9362f6e1-9a8a-4609-b2ed-601ec5b3e435", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a became leader
	I1004 03:12:28.280846       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-214000_0ebda79f-e7d9-4c19-9561-b3afe90aee2a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-9tvdj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9xkpc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9xkpc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  82s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  82s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (3.54s)
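The scheduling events above explain the failure: the test's busybox replicas carry pod
anti-affinity, so each must land on a distinct node, and with only one schedulable node
two replicas stay Pending and PingHostFromPods finds non-running pods. The constraint can
be inspected directly (assuming the workload is a Deployment named busybox, as the
ReplicaSet name busybox-7dff88458 suggests):

  kubectl --context ha-214000 get deploy busybox -o jsonpath='{.spec.template.spec.affinity}'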

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-214000 -v=7 --alsologtostderr
E1003 20:25:10.880873    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-214000 -v=7 --alsologtostderr: (54.745536181s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (362.602463ms)

                                                
                                                
-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:25:52.548640    4130 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:25:52.548988    4130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:25:52.548993    4130 out.go:358] Setting ErrFile to fd 2...
	I1003 20:25:52.548997    4130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:25:52.549175    4130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:25:52.549376    4130 out.go:352] Setting JSON to false
	I1003 20:25:52.549398    4130 mustload.go:65] Loading cluster: ha-214000
	I1003 20:25:52.549433    4130 notify.go:220] Checking for updates...
	I1003 20:25:52.549777    4130 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:25:52.549795    4130 status.go:174] checking status of ha-214000 ...
	I1003 20:25:52.550265    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.550306    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.561705    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51104
	I1003 20:25:52.562023    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.562435    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.562446    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.562716    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.562859    4130 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:25:52.562968    4130 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:25:52.563048    4130 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:25:52.564133    4130 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:25:52.564151    4130 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:25:52.564421    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.564453    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.575700    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51106
	I1003 20:25:52.576154    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.576519    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.576539    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.576781    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.576901    4130 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:25:52.577014    4130 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:25:52.577287    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.577313    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.588416    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51108
	I1003 20:25:52.588757    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.589105    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.589124    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.589336    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.589436    4130 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:25:52.589602    4130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:25:52.589623    4130 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:25:52.589701    4130 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:25:52.589777    4130 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:25:52.589857    4130 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:25:52.589950    4130 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:25:52.622728    4130 ssh_runner.go:195] Run: systemctl --version
	I1003 20:25:52.627010    4130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:25:52.638215    4130 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:25:52.638241    4130 api_server.go:166] Checking apiserver status ...
	I1003 20:25:52.638291    4130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:25:52.654396    4130 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:25:52.662236    4130 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:25:52.662304    4130 ssh_runner.go:195] Run: ls
	I1003 20:25:52.665440    4130 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:25:52.669293    4130 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:25:52.669304    4130 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:25:52.669310    4130 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:25:52.669322    4130 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:25:52.669608    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.669628    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.680991    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51112
	I1003 20:25:52.681302    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.681635    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.681651    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.681868    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.681982    4130 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:25:52.682063    4130 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:25:52.682155    4130 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:25:52.683234    4130 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:25:52.683241    4130 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:25:52.683497    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.683524    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.694277    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51114
	I1003 20:25:52.694565    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.694919    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.694939    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.695135    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.695233    4130 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:25:52.695324    4130 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:25:52.695589    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.695612    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.706293    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51116
	I1003 20:25:52.706677    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.707043    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.707054    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.707294    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.707418    4130 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:25:52.707573    4130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:25:52.707584    4130 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:25:52.707676    4130 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:25:52.707790    4130 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:25:52.707887    4130 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:25:52.707991    4130 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:25:52.741225    4130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:25:52.752766    4130 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:25:52.752779    4130 api_server.go:166] Checking apiserver status ...
	I1003 20:25:52.752829    4130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:25:52.763241    4130 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:25:52.763252    4130 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:25:52.763257    4130 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:25:52.763266    4130 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:25:52.763561    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.763585    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.774506    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51119
	I1003 20:25:52.774821    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.775178    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.775194    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.775415    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.775543    4130 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:25:52.775637    4130 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:25:52.775724    4130 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:25:52.776819    4130 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:25:52.776827    4130 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:25:52.777079    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.777104    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.787922    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51121
	I1003 20:25:52.788251    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.788580    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.788592    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.788805    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.788973    4130 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:25:52.789082    4130 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:25:52.789348    4130 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:52.789373    4130 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:52.800221    4130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51123
	I1003 20:25:52.800535    4130 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:52.800893    4130 main.go:141] libmachine: Using API Version  1
	I1003 20:25:52.800912    4130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:52.801156    4130 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:52.801297    4130 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:25:52.801460    4130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:25:52.801473    4130 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:25:52.801571    4130 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:25:52.801678    4130 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:25:52.801765    4130 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:25:52.801876    4130 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:25:52.837245    4130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:25:52.848747    4130 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
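For reference, each "status:" line in the stderr dump above is one record per node; a sketch of its shape, with field names copied from the printed output (a paraphrase only, the real type lives in minikube's status.go):

	package status // sketch only, not minikube source
	
	type NodeStatus struct {
		Name       string // "ha-214000-m02", "ha-214000-m03", ...
		Host       string // VM state reported by the hyperkit driver
		Kubelet    string // systemctl is-active probe: Running / Stopped
		APIServer  string // pgrep probe on control planes, "Irrelevant" on workers
		Kubeconfig string // "Configured" when the kubeconfig server entry matches
		Worker     bool   // true for nodes that carry no control plane
	}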
ha_test.go:236: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.044652788s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
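	Most rows in the audit table are the same pod-IP poll repeated while the busybox pods come up. One row, replayed as a subprocess call (a sketch; the argument layout is copied from the Command/Args columns above, and the shell quoting around the jsonpath expression is unnecessary once exec.Command passes arguments directly):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Replays one audit row: kubectl -p ha-214000 -- get pods -o jsonpath=...
		out, err := exec.Command("out/minikube-darwin-amd64", "kubectl", "-p", "ha-214000",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Printf("pod IPs: %s\n", out) // space-separated, empty until pods are scheduled
	}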
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
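	Every line below follows the glog-style header documented above. A regexp sketch for pulling such lines apart (the capture-group naming is mine, not minikube's):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		m := logLine.FindStringSubmatch("I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...")
		fmt.Println(m[1], m[5], m[6]) // severity, file:line, message
	}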
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
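	The preload step above is a plain cache lookup: if the tarball for this Kubernetes version and container runtime is already on disk, the download is skipped. A sketch of that check, using the exact cache path from the log:
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		tarball := "/Users/jenkins/minikube-integration/19546-1440/.minikube/cache/" +
			"preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4"
		if _, err := os.Stat(tarball); err == nil {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("no preload cached, would download:", err)
		}
	}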
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
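	Attempts 0 through 4 above are one loop: the driver knows only the MAC it generated, so it polls macOS's vmnet lease database roughly every two seconds until that MAC gains an entry. A sketch of the scan, assuming the stock /var/db/dhcpd_leases format (hw_address=1,<mac>); the parsed fields printed in the log come from these entries:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	func main() {
		const mac = "a:aa:e8:3c:fe:20" // "Generated MAC" from the log above
		for attempt := 0; ; attempt++ {
			data, err := os.ReadFile("/var/db/dhcpd_leases")
			if err == nil {
				for _, line := range strings.Split(string(data), "\n") {
					if strings.TrimSpace(line) == "hw_address=1,"+mac {
						fmt.Println("lease found on attempt", attempt)
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
		}
	}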
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
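	"Waiting for SSH" is just a retry loop around a no-op command; the empty output above means "exit 0" finally succeeded inside the guest. A sketch of that loop, with the ssh binary standing in for the native client the log mentions:
	
	package main
	
	import (
		"os/exec"
		"time"
	)
	
	func main() {
		key := "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa"
		for {
			// The same trivial command the provisioner runs: success means sshd is up.
			if exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
				"docker@192.169.0.5", "exit 0").Run() == nil {
				return
			}
			time.Sleep(time.Second)
		}
	}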
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
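	The san=[...] list above becomes the server certificate's subject alternative names. A rough crypto/x509 approximation of that step, self-signed for brevity (the real flow signs with ca.pem/ca-key.pem as the log says); SANs, org and lifetime are taken from the log and cluster config:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}}, // org= above
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			// san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
			DNSNames:    []string{"ha-214000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; libmachine signs with the CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}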
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
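The write above lands in docker.service.new rather than docker.service on purpose: minikube only swaps the new unit into place, reloads systemd, and restarts Docker when diff reports a difference (here the target did not exist yet, so diff fails and the replace branch runs). A minimal sketch of the same compare-then-swap idiom, using the paths from the log:

	# install a unit file only when its content actually changed (sketch)
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new"; then
	    sudo mv "$new" "$cur"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi

An unchanged unit therefore costs one diff and no Docker restart, which is what keeps repeated provisioning runs idempotent.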
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
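This run of sed edits pins containerd to the cgroupfs cgroup driver so it matches the cgroupDriver: cgroupfs that the KubeletConfiguration sets further down; a kubelet/runtime cgroup-driver mismatch is a classic cause of pods failing to start. The decisive toggle, as run above:

	# keep containerd on cgroupfs to match the kubelet (from the log)
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl restart containerd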
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
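The sysctl failure just above is expected on a fresh VM: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, and Kubernetes needs it (plus IPv4 forwarding) so that bridged pod traffic is actually seen by iptables. The recovery sequence is roughly:

	# expose bridged traffic to iptables and allow routing (sketch)
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable after modprobe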
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
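Docker itself is healthy at this point, but the kubelet speaks CRI, so the version probe goes through cri-dockerd, the shim that translates CRI calls to the Docker Engine API; /etc/crictl.yaml (written above) points crictl at its socket. The equivalent explicit invocation would be:

	# query the CRI endpoint served by cri-dockerd (sketch)
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version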
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
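The one-liner above is minikube's idempotent hosts-entry pattern: drop any line already ending in the tab-anchored name, append the fresh mapping, and copy the temp file back over /etc/hosts. Generalized (the helper name is hypothetical):

	# pin NAME to IP in /etc/hosts without duplicating entries (sketch)
	update_hosts_entry() {
	    local ip="$1" name="$2"
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}
	update_hosts_entry 192.169.0.1 host.minikube.internal

Using cp rather than mv rewrites the file in place, so the inode of /etc/hosts is preserved.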
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
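That is the preload path working as intended: instead of pulling eight images over the network, minikube scp'd a ~342 MB lz4 tarball of a pre-populated /var/lib/docker into the VM, unpacked it over /var (keeping file capabilities via xattrs), rewrote Docker's image index repositories.json to match, and restarted Docker. Condensed from the commands above:

	# unpack the preloaded image store and verify (from the log)
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	docker images --format '{{.Repository}}:{{.Tag}}'   # should list the k8s images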
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
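The file written to /var/tmp/minikube/kubeadm.yaml is four YAML documents in one: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration. Note that controlPlaneEndpoint targets control-plane.minikube.internal:8443, the kube-vip VIP, which is what lets additional control-plane nodes join the same endpoint later. On kubeadm v1.26 and newer the assembled file could be sanity-checked up front with something like (not part of this run):

	# validate the generated multi-document config (sketch)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml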
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
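kube-vip runs as a static pod on each control-plane node and elects a leader through the plndr-cp-lock lease; the leader answers ARP for the VIP 192.169.0.254, and with lb_enable it also spreads API traffic on port 8443 across healthy control planes. Once the cluster is up, the election state can be inspected, e.g.:

	# which node currently owns the VIP? (sketch)
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'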
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
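The kubelet unit just started is assembled from the two files scp'd above: the base /lib/systemd/system/kubelet.service plus the drop-in 10-kubeadm.conf, whose ExecStart reset carries the node-specific flags shown earlier (hostname-override, node-ip, bootstrap kubeconfig). systemd merges them, which can be confirmed with:

	# show the kubelet unit together with its drop-ins (sketch)
	sudo systemctl cat kubelet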
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
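The apiserver certificate generated here is deliberately valid for more than the node address: its SANs cover the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, the node IP 192.169.0.5, and the kube-vip address 192.169.0.254, so clients can reach the API server over any of those paths. The list can be inspected from the host, e.g.:

	# show the names/IPs the apiserver cert is signed for (path from the log)
	openssl x509 -noout -text \
	    -in /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'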
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
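The link names 51391683.0, 3ec20f2e.0 and b5213941.0 are not arbitrary: OpenSSL locates CA certificates in /etc/ssl/certs by subject-name hash, which is exactly the value the openssl x509 -hash probes above print. Each test-and-link pair reduces to:

	# install a CA under the subject-hash name OpenSSL looks up (sketch)
	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"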
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machine
s/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
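Aside on the guest-clock check above: the guest reports `date +%s.%N` as epoch seconds with a nanosecond fraction, and fix.go compares that against the host's wall clock. A sketch that reproduces the logged 173.803875ms delta from the two timestamps printed above (the tolerance constant below is assumed for illustration; minikube's actual threshold is not printed in this log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Guest output of `date +%s.%N`, copied from the log above.
		out := "1728011542.860223875"
		parts := strings.SplitN(out, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		// Host ("Remote") timestamp from the same log line; PDT is UTC-7.
		pdt := time.FixedZone("PDT", -7*3600)
		remote := time.Date(2024, 10, 3, 20, 12, 22, 686420000, pdt)

		delta := guest.Sub(remote)        // expect 173.803875ms
		const tolerance = 2 * time.Second // assumed value, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
	}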
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
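The two lines above show how preinstalled CNI configs are sidestepped: anything matching *bridge* or *podman* under /etc/cni/net.d is renamed with a .mk_disabled suffix rather than deleted. A Go sketch equivalent to the `find ... -exec mv` just run (a reconstruction, not minikube's code; run it only on a throwaway VM):

	package main

	import (
		"fmt"
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pat)
			if err != nil {
				log.Fatal(err) // only possible for a malformed pattern
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled; mirrors find's -not -name *.mk_disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					log.Printf("rename %s: %v", m, err)
					continue
				}
				fmt.Println("disabled", m)
			}
		}
	}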
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
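For the failure itself, the decisive journalctl lines above are at 03:13:24: the restarted dockerd (pid 913) never managed to dial /run/containerd/containerd.sock and gave up with "context deadline exceeded" after roughly a minute (Starting up at 03:12:24, failure at 03:13:24), so docker.service exited 1/FAILURE. A quick in-VM probe for that socket, as a diagnostic sketch (not part of minikube or the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the dockerd failure message above; timeout is arbitrary.
		const sock = "/run/containerd/containerd.sock"
		conn, err := net.DialTimeout("unix", sock, 5*time.Second)
		if err != nil {
			fmt.Println("containerd socket not reachable:", err)
			return // the state dockerd[913] timed out in
		}
		conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}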
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         13 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     13 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         13 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         13 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         13 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         13 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x3 over 13m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x3 over 13m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      33s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x2 over 34s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x2 over 34s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x2 over 34s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                4s                 kubelet          Node ha-214000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:25:54 up 14 min,  0 users,  load average: 0.28, 0.23, 0.18
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:24:43.498582       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:43.499114       1 main.go:299] handling current node
	I1004 03:24:53.496563       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:53.496603       1 main.go:299] handling current node
	I1004 03:25:03.497896       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:03.498084       1 main.go:299] handling current node
	I1004 03:25:13.497129       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:13.497481       1 main.go:299] handling current node
	I1004 03:25:23.497073       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:23.497897       1 main.go:299] handling current node
	I1004 03:25:23.498093       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:23.498141       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:23.498481       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.7 Flags: [] Table: 0} 
	I1004 03:25:33.496961       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:33.497126       1 main.go:299] handling current node
	I1004 03:25:33.497275       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:33.497400       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:43.501426       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:43.501712       1 main.go:299] handling current node
	I1004 03:25:43.502096       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:43.502292       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:53.496447       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:53.496678       1 main.go:299] handling current node
	I1004 03:25:53.496782       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:53.496864       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:12:36.259697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:13:28.026340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.224984ms"
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:21:05 ha-214000 kubelet[2148]: E1004 03:21:05.386512    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:25:04 ha-214000 kubelet[2148]: E1004 03:25:04.972452    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-9tvdj busybox-7dff88458-z5g4l:

-- stdout --
	Name:             busybox-7dff88458-9tvdj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-214000-m03/192.169.0.7
	Start Time:       Thu, 03 Oct 2024 20:25:50 -0700
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9xkpc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9xkpc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m20s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Normal   Scheduled         5s                   default-scheduler  Successfully assigned default/busybox-7dff88458-9tvdj to ha-214000-m03
	  Normal   Pulling           5s                   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28"
	
	
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m20s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5s                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (57.71s)
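The FailedScheduling events above ("didn't match pod anti-affinity rules") are consistent with the busybox deployment requiring its replicas to land on distinct nodes while the cluster still had fewer schedulable nodes than replicas. The deployment manifest itself is not part of this log; the Go fragment below is a hypothetical sketch, using the upstream k8s.io/api types, of the kind of required podAntiAffinity that produces exactly this scheduler message.

// Hypothetical sketch only -- the test's real busybox manifest is not shown
// in this log. A "required" anti-affinity rule like this forbids two
// app=busybox pods on the same node, so an extra replica stays Pending with
// "didn't match pod anti-affinity rules" until another node is schedulable.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func busyboxAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			// Required rules are hard constraints: the scheduler reports
			// FailedScheduling rather than co-locating two matching pods.
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// At most one matching pod per distinct value of this node
				// label, i.e. per node.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", busyboxAntiAffinity())
}

Under a rule like this, the behavior in the events is expected: busybox-7dff88458-9tvdj was Scheduled only 5s after ha-214000-m03 registered, while busybox-7dff88458-z5g4l remained Pending on the two nodes that already hosted matching pods.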

TestMultiControlPlane/serial/HAppyAfterClusterStart (3.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:305: expected profile "ha-214000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPor
t\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"
v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":
false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwar
ePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-214000" in json of 'profile list' to have "HAppy" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIS
erverPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVers
ion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-p
olicy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQem
uFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
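Both assertions above fail against the same decoded structure: ha_test.go:305 expects the profile's Config.Nodes to list four nodes, and ha_test.go:309 expects the profile Status to be "HAppy", while this run reports three nodes and "OK". The self-contained Go sketch below reproduces that check under stated assumptions: the struct mirrors only the fields visible in the JSON of the failure message (the real test decodes minikube's full config type), and the binary path is the one used throughout this run.

// Sketch of the profile-list check, assuming only the JSON fields visible in
// the failure message above; the real test uses minikube's own config types.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Same invocation the test runs: profile list as JSON.
	out, err := exec.Command("out/minikube-darwin-amd64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// ha_test.go:305 wants 4 nodes here; ha_test.go:309 wants "HAppy".
		fmt.Printf("%s: status=%q nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}

Run against the state captured above, this would print ha-214000: status="OK" nodes=3, which is exactly the mismatch the two assertions report.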
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.346582979s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
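
For orientation: the jsonpath queries in the table above are ordinary kubectl invocations routed through the minikube binary. A minimal Go sketch of driving one of them the way a test harness might (binary path and profile name are taken from the table; everything else is illustrative, not the harness's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // exec.Command bypasses the shell, so the jsonpath argument
        // needs none of the single quotes seen in the table.
        out, err := exec.Command("out/minikube-darwin-amd64", "kubectl",
            "-p", "ha-214000", "--",
            "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            panic(err)
        }
        // The query prints space-separated names, e.g. the
        // busybox-7dff88458-* pods exercised above.
        for _, name := range strings.Fields(string(out)) {
            fmt.Println(name)
        }
    }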
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
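
The profile save above amounts to serialising the cluster config (logged in full just above) to JSON under the profile directory, guarded by the write-lock from lock.go. A compressed sketch with a stand-in struct, since minikube's real ClusterConfig is far larger and locking is omitted here:

    package main

    import (
        "encoding/json"
        "os"
    )

    // clusterConfig is a stand-in for the much larger struct logged above.
    type clusterConfig struct {
        Name   string
        Driver string
        Memory int
    }

    func main() {
        cfg := clusterConfig{Name: "ha-214000", Driver: "hyperkit", Memory: 2200}
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        // 0644 keeps the profile readable by later minikube invocations.
        if err := os.WriteFile("config.json", data, 0o644); err != nil {
            panic(err)
        }
    }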
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
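
The driver resolves the new VM's address by matching its generated MAC against the host's DHCP lease table, retrying every two seconds until an entry shows up (Attempt 5 below finds 192.169.0.5). A hedged sketch of that lookup; it assumes the stock macOS /var/db/dhcpd_leases layout of ip_address=/hw_address= lines rather than the parsed form printed above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findLeaseIP scans a dhcpd_leases-style file for the entry whose
    // hardware address matches mac and returns its IP, if any.
    func findLeaseIP(path, mac string) (string, bool) {
        f, err := os.Open(path)
        if err != nil {
            return "", false
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // hw_address lines look like "hw_address=1,a:aa:e8:3c:fe:20".
                if strings.HasSuffix(line, ","+mac) {
                    return ip, true
                }
            }
        }
        return "", false
    }

    func main() {
        if ip, ok := findLeaseIP("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20"); ok {
            fmt.Println("found", ip)
        }
    }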
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
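
The probe that just succeeded is the classic "run `exit 0` over SSH" readiness check. A self-contained sketch with golang.org/x/crypto/ssh — not minikube's own WaitForSSH — using the key and address created earlier in this log:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a just-created throwaway VM, not for real hosts.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is up")
    }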
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
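
Provisioner detection is a plain KEY=VALUE scan of the /etc/os-release shown above. A small sketch of that parse (assuming simple unescaped values, which holds for the Buildroot file here):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        fields := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                fields[k] = strings.Trim(v, `"`)
            }
        }
        // Prints: Buildroot 2023.02.9 for the guest above.
        fmt.Println(fields["NAME"], fields["VERSION_ID"])
    }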
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
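
The /etc/hosts snippet above is rendered host-side and shipped over SSH; it replaces an existing 127.0.1.1 line or appends one, so re-running it is a no-op. A sketch of producing the same snippet for an arbitrary hostname (a hypothetical helper, not minikube's exact template):

    package main

    import "fmt"

    // hostsCmd returns a shell snippet that maps 127.0.1.1 to name exactly
    // once: replace an existing 127.0.1.1 line, otherwise append one.
    func hostsCmd(name string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
        else
            echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
        fi
    fi`, name)
    }

    func main() {
        fmt.Println(hostsCmd("ha-214000"))
    }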
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
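
Generating the server cert boils down to signing a leaf with the cluster CA and putting the logged SANs into the x509 IP and DNS fields. A compressed standard-library sketch; the throwaway CA in main stands in for the ca.pem/ca-key.pem pair above, and PEM encoding is omitted:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate carrying the SANs logged
    // above with an already-parsed CA cert/key pair.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The san=[...] list above, split into IP and DNS fields.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
            DNSNames:    []string{"ha-214000", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }

    func main() {
        // Throwaway CA standing in for the real ca.pem/ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        ca, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }
        der, _, err := issueServerCert(ca, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("server cert DER bytes:", len(der))
    }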
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
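
The container-runtime setup starts from the root filesystem type, read with GNU df exactly as above. The same probe from Go, assuming GNU coreutils on the target (true for the Buildroot guest, not for macOS):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            panic(err)
        }
        fields := strings.Fields(strings.TrimSpace(string(out)))
        // First field is the "Type" header, last one the value (tmpfs above).
        fmt.Println(fields[len(fields)-1])
    }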
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
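
The command above is the write-to-.new, diff, swap-and-restart idiom: when the staged unit matches the live one, nothing restarts; here the live file did not exist yet, so the unit was installed and docker started. The same logic sketched in Go, run as root on the guest (paths and systemctl steps mirror the log; error handling is compressed):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func main() {
        staged, err := os.ReadFile("/lib/systemd/system/docker.service.new")
        if err != nil {
            panic(err)
        }
        // Only swap and restart when the staged unit actually differs.
        live, err := os.ReadFile("/lib/systemd/system/docker.service")
        if err == nil && bytes.Equal(staged, live) {
            return
        }
        if err := os.Rename("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }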
	
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
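
The clock check runs `date +%s.%N` in the guest and diffs it against the host clock, skipping any sync when the delta is inside tolerance. A sketch that reproduces the -177.49ms computed above from the two timestamps in the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta turns the guest's `date +%s.%N` output into a duration
    // relative to the host clock. float64 loses sub-microsecond precision,
    // which does not matter at this tolerance.
    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }

    func main() {
        // The "Remote" (host) timestamp logged above, as nanoseconds.
        hostNow := time.Unix(0, 1728011503993359000)
        d, err := clockDelta("1728011503.815868228", hostNow)
        if err != nil {
            panic(err)
        }
        fmt.Println("guest clock delta:", d) // ≈ -177.49ms, matching the log
    }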
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
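
The sysctl failure above is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, so the tool probes first and falls back to modprobe, as the log shows. A minimal sketch of that probe-then-load sequence (illustrative, not minikube's actual code; commands are the standard Linux utilities run on the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter probes the bridge-netfilter sysctl and loads
	// br_netfilter if the key is missing, mirroring the log sequence above.
	func ensureBridgeNetfilter() error {
		if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
			return nil // module already loaded, key exists
		}
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
		// Re-check: the key should exist now that the module is loaded.
		return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("bridge netfilter unavailable:", err)
		}
	}
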
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
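
The three steps above are the image-preload fast path: copy a prebuilt lz4 tarball of the container images into the guest, unpack it directly over /var (so the image layers land under /var/lib/docker), then delete the tarball to reclaim the space. A sketch of the extraction step as it would run on the guest, using the exact tar invocation and paths from the log (illustrative wrapper, not minikube's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Unpack the preloaded image tarball over /var, preserving the
		// extended attributes Kubernetes binaries rely on (file capabilities).
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", // decompress with lz4 on the fly
			"-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		// The tarball is deleted afterwards, as the rm in the log shows.
	}
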
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
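
After restarting Docker, minikube re-lists the images to confirm the preload took effect before deciding to skip image loading. A minimal sketch of that membership check; the function name and abbreviated image list are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"strings"
	)

	// preloaded reports whether every required image appears in the output
	// of `docker images --format {{.Repository}}:{{.Tag}}`.
	func preloaded(dockerImagesOutput string, required []string) bool {
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
			have[strings.TrimSpace(line)] = true
		}
		for _, img := range required {
			if !have[img] {
				return false
			}
		}
		return true
	}

	func main() {
		out := "registry.k8s.io/kube-apiserver:v1.31.1\nregistry.k8s.io/pause:3.10"
		fmt.Println(preloaded(out, []string{"registry.k8s.io/pause:3.10"})) // true
	}
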
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
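
The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:181. A minimal text/template sketch of that rendering approach for one fragment; the template text and struct here are illustrative, not minikube's actual template, with values taken from the options in this log:

	package main

	import (
		"os"
		"text/template"
	)

	type clusterOpts struct {
		ClusterName   string
		PodSubnet     string
		ServiceSubnet string
		K8sVersion    string
	}

	const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(fragment))
		// Values from the kubeadm options logged above.
		t.Execute(os.Stdout, clusterOpts{"mk", "10.244.0.0/16", "10.96.0.0/12", "v1.31.1"})
	}
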
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
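
The static pod above runs kube-vip on the control plane: it leader-elects via the plndr-cp-lock lease, advertises the virtual IP 192.169.0.254 over ARP, and load-balances API-server traffic on port 8443 (cp_enable/lb_enable). A small sketch that probes the VIP's API endpoint once it is up; purely illustrative, with TLS verification skipped only because this reachability probe predates trusting the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// The cluster CA is not in the system trust store, so skip
			// verification for this reachability probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			fmt.Println("VIP not reachable yet:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("VIP responded:", resp.Status) // even 401/403 proves the VIP routes
	}
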
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
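
The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: `openssl x509 -hash` prints an 8-hex-digit hash of the certificate's subject name, and a symlink named `<hash>.0` in /etc/ssl/certs lets the library locate the CA during chain verification. A sketch of deriving the link name and installing it, shelling out to openssl for the hash as the log does (illustrative; writing to /etc/ssl/certs needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert installs certPath into OpenSSL's hashed cert-directory layout.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}
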
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
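
The four grep-then-rm pairs above are the stale-config sweep: each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8443, and any file that does not (or does not exist, as here on first start) is removed so kubeadm regenerates it. A compact sketch of that sweep, with the file list and endpoint taken from the log (illustrative, not minikube's actual code):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err == nil && bytes.Contains(data, []byte(endpoint)) {
				continue // config already points at the expected endpoint
			}
			os.Remove(f) // missing or stale: let kubeadm rewrite it
			fmt.Println("removed stale config:", f)
		}
	}
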
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
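
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for joining nodes: it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA file this cluster uses:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
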
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
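
The repeated `kubectl get sa default` calls above (elevateKubeSystemPrivileges) are a readiness poll: the namespace's default service account only appears once the controller-manager's service-account controller has run, and the minikube-rbac cluster-admin binding created alongside it needs that account to exist. A sketch of the poll loop; the 500ms interval matches the spacing visible in the log, and the wrapper itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until the default service account exists,
	// mirroring the retry cadence in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/var/lib/minikube/kubeconfig", time.Minute)
		fmt.Println(err)
	}
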
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
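The sed pipeline run at 20:12:08.948587 is what produced the "host record injected" line above: it splices a hosts block into the CoreDNS Corefile just ahead of the forward directive. Reconstructed from the sed expressions themselves, the fragment added to the coredns ConfigMap is:

	        hosts {
	           192.169.0.1 host.minikube.internal
	           fallthrough
	        }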
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
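The Attempt 0-5 sequence above is the hyperkit driver polling the macOS DHCP lease database every two seconds until the freshly generated MAC shows up. A minimal standalone sketch of the same lookup, assuming the usual /var/db/dhcpd_leases block layout (brace-delimited entries with name/ip_address/hw_address fields; those field names come from that file format, not from minikube's code):

	mac="8e:24:b7:e1:5:14"
	until ip=$(sudo awk -v mac="$mac" '
	    /^\{/          { ip = ""; found = 0 }            # new lease block
	    /ip_address=/  { split($0, a, "="); ip = a[2] }  # remember its IP
	    $0 ~ mac       { found = 1 }                     # MAC seen in this block
	    /^\}/          { if (found) { print ip; exit } }
	' /var/db/dhcpd_leases) && [ -n "$ip" ]; do
	    sleep 2                                          # retry, as the driver does
	done
	echo "IP: $ip"    # -> IP: 192.169.0.6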
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
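The `exit 0` command above is the WaitForSSH probe: provisioning only continues once a trivial command succeeds over SSH. A hypothetical standalone equivalent against this node (key path taken from the sshutil line later in the log):

	# retry a no-op over SSH until sshd on the new VM accepts the key
	until ssh -i /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa \
	        -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
	        docker@192.169.0.6 'exit 0' 2>/dev/null; do
	    sleep 2
	done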
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
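Provisioner detection is just the `cat /etc/os-release` output above matched against known IDs; the same check as a shell one-liner (a sketch, not minikube's actual matcher):

	. /etc/os-release && echo "detected: $ID $VERSION_ID"    # -> detected: buildroot 2023.02.9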
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
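The server cert above is signed by the minikube CA with the listed SANs. For illustration only, a self-signed stand-in with the same SAN set could be produced with openssl (1.1.1+ for -addext); this is not the CA-signed flow minikube actually uses:

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	    -keyout server-key.pem -out server.pem \
	    -subj "/O=jenkins.ha-214000-m02" \
	    -addext "subjectAltName=IP:127.0.0.1,IP:192.169.0.6,DNS:ha-214000-m02,DNS:localhost,DNS:minikube"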
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
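	The command above is an idempotent install: diff exits non-zero when the installed unit differs from the rendered one (here it cannot stat the old file at all, so the replace branch runs). The same pattern in isolation, as a sketch using the paths from the log:
	
	    new=/lib/systemd/system/docker.service.new
	    dst=/lib/systemd/system/docker.service
	    # replace the unit and restart docker only when the content changed
	    sudo diff -u "$dst" "$new" || {
	      sudo mv "$new" "$dst"
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }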
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
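	The guest-clock check runs date +%s.%N inside the VM and compares the result against the host clock at the moment the command returns; the 173ms delta above is within minikube's tolerance, so no clock sync is forced. A coarse, one-second-resolution sketch of the same comparison, assuming the SSH user, IP, and key shown earlier in the log:
	
	    host_ts=$(date +%s)
	    guest_ts=$(ssh -i /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa docker@192.169.0.6 date +%s)
	    echo "guest-host skew: $((guest_ts - host_ts))s"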
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
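	Note the find invocation is logged with its shell escaping stripped; run interactively, the parentheses and globs need quoting. An equivalent runnable form:
	
	    # rename bridge/podman CNI configs so they stop being loaded
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv "$0" "$0.mk_disabled"' {} \;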
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
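	Taken together, the sed runs above pin containerd to the cgroupfs cgroup driver, the runc.v2 shim, the pause:3.10 sandbox image, conf_dir /etc/cni/net.d, and unprivileged ports. A quick spot-check of the result, with the expected values shown as comments (assuming the default config.toml layout):
	
	    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    # SystemdCgroup = false
	    # sandbox_image = "registry.k8s.io/pause:3.10"
	    # conf_dir = "/etc/cni/net.d"
	    # enable_unprivileged_ports = true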
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
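	The sysctl probe at 20:12:23.010 exits 255 because the bridge netfilter key only exists once br_netfilter is loaded; minikube treats that as non-fatal, loads the module, and enables IPv4 forwarding. The manual equivalent on the guest:
	
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables      # resolves once the module is loaded, typically to 1
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward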
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
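	crictl resolves its runtime endpoint from /etc/crictl.yaml; the file was first pointed at containerd (20:12:22.904) and has just been rewritten to the cri-dockerd socket now that docker is the selected runtime. Verifying on the guest:
	
	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	    sudo crictl ps                                 # talks to cri-dockerd once it is running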
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
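	The 190-byte drop-in written to 10-cni.conf is not echoed in the log; presumably it adjusts cri-docker's command line for the CNI setup, but the exact contents are elided here. Whatever it contains, the merged unit can be inspected like any systemd drop-in:
	
	    sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf
	    sudo systemctl cat cri-docker.service          # base unit plus the drop-in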
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
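	The 130-byte daemon.json is likewise not echoed. For illustration only (the exact keys minikube writes may differ), a daemon.json that selects the cgroupfs driver looks like:
	
	    sudo tee /etc/docker/daemon.json <<'EOF'
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	    EOF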
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
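	Reading the journal: the first dockerd (pid 504) comes up with its managed containerd, is terminated at 03:12:23 so the new daemon.json takes effect, and the second dockerd (pid 913) then blocks for a full minute dialing /run/containerd/containerd.sock before failing. A likely culprit, though only an inference from this log, is the stale system-containerd socket left behind when containerd was restarted (20:12:23.134) and then stopped (20:12:23.185) moments earlier, which makes dockerd wait on the dead external containerd instead of spawning its managed one. The natural next diagnostic steps on the guest:
	
	    sudo systemctl status containerd
	    sudo journalctl --no-pager -u containerd
	    ls -l /run/containerd/containerd.sock          # stale socket with no listener?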
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         13 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     13 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         13 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         13 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         13 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         13 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x3 over 13m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x3 over 13m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      36s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x2 over 37s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x2 over 37s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x2 over 37s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                7s                 kubelet          Node ha-214000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:25:57 up 14 min,  0 users,  load average: 0.25, 0.22, 0.18
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:24:43.498582       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:43.499114       1 main.go:299] handling current node
	I1004 03:24:53.496563       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:53.496603       1 main.go:299] handling current node
	I1004 03:25:03.497896       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:03.498084       1 main.go:299] handling current node
	I1004 03:25:13.497129       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:13.497481       1 main.go:299] handling current node
	I1004 03:25:23.497073       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:23.497897       1 main.go:299] handling current node
	I1004 03:25:23.498093       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:23.498141       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:23.498481       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.7 Flags: [] Table: 0} 
	I1004 03:25:33.496961       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:33.497126       1 main.go:299] handling current node
	I1004 03:25:33.497275       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:33.497400       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:43.501426       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:43.501712       1 main.go:299] handling current node
	I1004 03:25:43.502096       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:43.502292       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:53.496447       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:53.496678       1 main.go:299] handling current node
	I1004 03:25:53.496782       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:53.496864       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:21:05 ha-214000 kubelet[2148]: E1004 03:21:05.386512    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:25:04 ha-214000 kubelet[2148]: E1004 03:25:04.972452    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
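Editor's note: the kube-proxy and kubelet entries in the dump above share a root cause — the Buildroot guest kernel lacks nftables support and the IPv6 nat table, so kube-proxy logs the nftables cleanup errors and falls back to the iptables Proxier in single-stack IPv4 mode, while the kubelet's periodic iptables canary fails once a minute. Per the single-stack lines above, this noise is likely unrelated to the test failure itself. A minimal sketch of the same probe, assuming it runs inside the VM (e.g. over minikube ssh) with ip6tables on PATH:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // The kubelet canary creates a throwaway chain in the nat table to detect
    // external rule flushes. On this kernel the ip6 nat module is missing, so
    // the command exits with status 3, matching the kubelet log above.
    out, err := exec.Command("ip6tables", "-w", "-t", "nat",
        "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
    if err != nil {
        fmt.Printf("canary probe failed as expected: %v\n%s", err, out)
        return
    }
    // If the chain was created after all, remove it again.
    exec.Command("ip6tables", "-w", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
}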
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m23s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.29s)
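Editor's note: the Pending pod in the describe output is held back by a required pod anti-affinity rule — each busybox replica must land on a distinct node (hence the "didn't match pod anti-affinity rules" events), and with ha-214000-m02's kubelet stopped only two nodes are schedulable, each already hosting a replica. A hedged sketch of a spec that produces this behaviour, written against the k8s.io/api types; the manifest the test actually applies may differ in detail:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Required (hard) anti-affinity on the pod's own app label: the scheduler
    // rejects any node that already runs a pod labelled app=busybox, keyed on
    // kubernetes.io/hostname, so replicas spread one per node or stay Pending.
    affinity := corev1.Affinity{
        PodAntiAffinity: &corev1.PodAntiAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "busybox"},
                },
                TopologyKey: "kubernetes.io/hostname",
            }},
        },
    }
    fmt.Printf("%+v\n", affinity)
}

With three replicas and a hard rule like this, the third pod stays Pending until a third node becomes Ready, which matches the event timeline above (0/1 nodes available, then 0/2).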

TestMultiControlPlane/serial/CopyFile (2.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status --output json -v=7 --alsologtostderr: exit status 2 (352.90294ms)

-- stdout --
	[{"Name":"ha-214000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-214000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-214000-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
** stderr ** 
	I1003 20:25:58.864585    4187 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:25:58.864813    4187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:25:58.864818    4187 out.go:358] Setting ErrFile to fd 2...
	I1003 20:25:58.864822    4187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:25:58.864999    4187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:25:58.865185    4187 out.go:352] Setting JSON to true
	I1003 20:25:58.865209    4187 mustload.go:65] Loading cluster: ha-214000
	I1003 20:25:58.865259    4187 notify.go:220] Checking for updates...
	I1003 20:25:58.865581    4187 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:25:58.865601    4187 status.go:174] checking status of ha-214000 ...
	I1003 20:25:58.866028    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:58.866077    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:58.877729    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51201
	I1003 20:25:58.878052    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:58.878487    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:58.878503    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:58.878701    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:58.878800    4187 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:25:58.878872    4187 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:25:58.878937    4187 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:25:58.880045    4187 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:25:58.880063    4187 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:25:58.880310    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:58.880345    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:58.891053    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51203
	I1003 20:25:58.891374    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:58.891698    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:58.891707    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:58.891974    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:58.892108    4187 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:25:58.892202    4187 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:25:58.892480    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:58.892508    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:58.903244    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51205
	I1003 20:25:58.903547    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:58.903961    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:58.903976    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:58.904186    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:58.904299    4187 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:25:58.904462    4187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:25:58.904483    4187 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:25:58.904555    4187 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:25:58.904656    4187 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:25:58.904765    4187 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:25:58.904867    4187 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:25:58.937941    4187 ssh_runner.go:195] Run: systemctl --version
	I1003 20:25:58.942200    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:25:58.952858    4187 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:25:58.952882    4187 api_server.go:166] Checking apiserver status ...
	I1003 20:25:58.952930    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:25:58.964205    4187 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:25:58.971239    4187 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:25:58.971291    4187 ssh_runner.go:195] Run: ls
	I1003 20:25:58.974422    4187 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:25:58.977929    4187 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:25:58.977939    4187 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:25:58.977945    4187 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:25:58.977955    4187 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:25:58.978204    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:58.978226    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:58.989131    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51209
	I1003 20:25:58.989487    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:58.989848    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:58.989869    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:58.990157    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:58.990283    4187 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:25:58.990382    4187 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:25:58.990457    4187 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:25:58.991590    4187 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:25:58.991598    4187 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:25:58.991863    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:58.991886    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:59.002871    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51211
	I1003 20:25:59.003197    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:59.003537    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:59.003552    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:59.003760    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:59.003875    4187 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:25:59.003967    4187 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:25:59.004235    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:59.004259    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:59.014953    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51213
	I1003 20:25:59.015265    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:59.015621    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:59.015637    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:59.015843    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:59.015956    4187 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:25:59.016110    4187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:25:59.016123    4187 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:25:59.016203    4187 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:25:59.016287    4187 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:25:59.016358    4187 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:25:59.016429    4187 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:25:59.049285    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:25:59.060714    4187 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:25:59.060730    4187 api_server.go:166] Checking apiserver status ...
	I1003 20:25:59.060782    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:25:59.071444    4187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:25:59.071455    4187 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:25:59.071460    4187 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:25:59.071469    4187 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:25:59.071735    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:59.071757    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:59.082930    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51216
	I1003 20:25:59.083244    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:59.083559    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:59.083569    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:59.083793    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:59.083913    4187 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:25:59.084004    4187 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:25:59.084072    4187 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:25:59.085206    4187 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:25:59.085214    4187 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:25:59.085464    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:59.085515    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:59.096461    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51218
	I1003 20:25:59.096789    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:59.097168    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:59.097183    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:59.097412    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:59.097525    4187 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:25:59.097619    4187 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:25:59.097887    4187 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:25:59.097911    4187 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:25:59.108559    4187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51220
	I1003 20:25:59.108959    4187 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:25:59.109310    4187 main.go:141] libmachine: Using API Version  1
	I1003 20:25:59.109321    4187 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:25:59.109519    4187 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:25:59.109618    4187 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:25:59.109749    4187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:25:59.109760    4187 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:25:59.109835    4187 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:25:59.109967    4187 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:25:59.110049    4187 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:25:59.110120    4187 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:25:59.144234    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:25:59.155600    4187 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-214000 status --output json -v=7 --alsologtostderr" : exit status 2
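Editor's note: CopyFile aborts here because minikube status returned exit status 2 (recorded at the top of this block) once it found ha-214000-m02 with its kubelet and apiserver stopped; the stderr trace shows the probe succeeding against https://192.169.0.254:8443/healthz (200 ok) but finding no kube-apiserver process on m02 via pgrep. A small sketch that decodes the JSON shape from the stdout block — an illustration, not minikube's own parser; the struct fields mirror the keys shown above:

package main

import (
    "encoding/json"
    "fmt"
)

// nodeStatus mirrors the keys of the status JSON printed in the stdout block.
type nodeStatus struct {
    Name       string
    Host       string
    Kubelet    string
    APIServer  string
    Kubeconfig string
    Worker     bool
}

func main() {
    raw := `[{"Name":"ha-214000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}]`
    var nodes []nodeStatus
    if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
        panic(err)
    }
    for _, n := range nodes {
        // Worker nodes report APIServer "Irrelevant", so only check it on
        // control-plane nodes; a stopped kubelet is unhealthy either way.
        if n.Kubelet != "Running" || (!n.Worker && n.APIServer != "Running") {
            fmt.Printf("%s unhealthy: kubelet=%s apiserver=%s\n",
                n.Name, n.Kubelet, n.APIServer)
        }
    }
}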
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.076575906s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
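
	The acquireMachinesLock spec printed above (Delay:500ms Timeout:13m0s) describes a retry-until-timeout named lock that serializes machine creation; the 81µs acquisition here just means the lock was uncontended. A generic sketch of that pattern, using an exclusive lock file as a stand-in for minikube's OS-level named mutex (function and path names are made up):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // acquire retries an exclusive-create until it wins or the timeout expires.
	    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	    	deadline := time.Now().Add(timeout)
	    	for {
	    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	    		if err == nil {
	    			return func() { f.Close(); os.Remove(path) }, nil
	    		}
	    		if time.Now().After(deadline) {
	    			return nil, errors.New("timed out acquiring " + path)
	    		}
	    		time.Sleep(delay) // Delay:500ms in the logged spec
	    	}
	    }

	    func main() {
	    	release, err := acquire("/tmp/ha-214000.lock", 500*time.Millisecond, 13*time.Minute)
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	defer release()
	    	fmt.Println("lock held; provisioning would start here")
	    }
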
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
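
	The "Attempt 0..5" loop above polls the macOS bootpd lease database for the MAC generated for this VM until a lease appears. A hedged sketch of that lookup; the lease-file block layout (name=/ip_address=/hw_address= lines) is assumed from the dhcp entries echoed in this log, not read from the hyperkit driver source:

	    package main

	    import (
	    	"bufio"
	    	"fmt"
	    	"os"
	    	"strings"
	    	"time"
	    )

	    // ipForMAC scans the lease file for a block whose hw_address matches mac.
	    func ipForMAC(leaseFile, mac string) (string, bool) {
	    	f, err := os.Open(leaseFile)
	    	if err != nil {
	    		return "", false
	    	}
	    	defer f.Close()
	    	var ip string
	    	sc := bufio.NewScanner(f)
	    	for sc.Scan() {
	    		line := strings.TrimSpace(sc.Text())
	    		switch {
	    		case strings.HasPrefix(line, "ip_address="):
	    			ip = strings.TrimPrefix(line, "ip_address=")
	    		case strings.HasPrefix(line, "hw_address="):
	    			// hw_address=1,a:aa:e8:3c:fe:20 -- drop the leading "1," type tag.
	    			if strings.TrimPrefix(line, "hw_address=1,") == mac {
	    				return ip, true
	    			}
	    		}
	    	}
	    	return "", false
	    }

	    func main() {
	    	for attempt := 0; attempt < 30; attempt++ {
	    		if ip, ok := ipForMAC("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20"); ok {
	    			fmt.Println("IP:", ip)
	    			return
	    		}
	    		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	    	}
	    	fmt.Println("no lease found")
	    }
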
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
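
	"Waiting for SSH to be available" is implemented as a no-op "exit 0" run over SSH until it succeeds; the <nil> error above is the success case. A sketch of that probe using golang.org/x/crypto/ssh, with the address and key path taken from this log; the retry cadence is illustrative:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"time"

	    	"golang.org/x/crypto/ssh"
	    )

	    // sshAlive returns nil once the guest accepts a connection and runs "exit 0".
	    func sshAlive(addr, user, keyPath string) error {
	    	key, err := os.ReadFile(keyPath)
	    	if err != nil {
	    		return err
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		return err
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            user,
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	    		Timeout:         5 * time.Second,
	    	}
	    	client, err := ssh.Dial("tcp", addr, cfg)
	    	if err != nil {
	    		return err
	    	}
	    	defer client.Close()
	    	sess, err := client.NewSession()
	    	if err != nil {
	    		return err
	    	}
	    	defer sess.Close()
	    	return sess.Run("exit 0") // nil means the guest shell is up
	    }

	    func main() {
	    	key := "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa"
	    	for {
	    		if err := sshAlive("192.169.0.5:22", "docker", key); err == nil {
	    			fmt.Println("SSH is available")
	    			return
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    }
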
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
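	The one-liner above is a compare-then-swap update: docker.service.new only replaces the installed unit (and Docker is only reloaded, enabled, and restarted) when the rendered file differs, so re-provisioning an unchanged machine is a no-op. Here diff failed because no unit existed yet, so the new one was installed. A local sketch of the same idempotent update, with illustrative helper names:

	    package main

	    import (
	    	"bytes"
	    	"fmt"
	    	"log"
	    	"os"
	    	"os/exec"
	    )

	    // installIfChanged swaps in the rendered unit only when it differs from
	    // the current one, then reloads/enables/restarts the service.
	    func installIfChanged(current, rendered string) error {
	    	oldBytes, err := os.ReadFile(current) // a missing file counts as "changed"
	    	if err == nil {
	    		newBytes, err2 := os.ReadFile(rendered)
	    		if err2 != nil {
	    			return err2
	    		}
	    		if bytes.Equal(oldBytes, newBytes) {
	    			return os.Remove(rendered) // nothing to do
	    		}
	    	}
	    	if err := os.Rename(rendered, current); err != nil {
	    		return err
	    	}
	    	for _, args := range [][]string{
	    		{"systemctl", "-f", "daemon-reload"},
	    		{"systemctl", "-f", "enable", "docker"},
	    		{"systemctl", "-f", "restart", "docker"},
	    	} {
	    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
	    			return fmt.Errorf("%v: %s", err, out)
	    		}
	    	}
	    	return nil
	    }

	    func main() {
	    	if err := installIfChanged(
	    		"/lib/systemd/system/docker.service",
	    		"/lib/systemd/system/docker.service.new",
	    	); err != nil {
	    		log.Fatal(err)
	    	}
	    }
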
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
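
	The guest-clock check above runs "date +%s.%N" in the VM, parses the epoch value, and accepts the host/guest delta if it is inside the allowed skew; replaying the two timestamps from this log reproduces the -177.490772ms delta. A sketch of that comparison, with the tolerance value assumed for illustration:

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // parseEpoch turns `date +%s.%N` output (9-digit fraction) into a time.Time.
	    func parseEpoch(s string) (time.Time, error) {
	    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	    			return time.Time{}, err
	    		}
	    	}
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	// Both values come from the log lines above.
	    	guest, err := parseEpoch("1728011503.815868228")
	    	if err != nil {
	    		panic(err)
	    	}
	    	host := time.Date(2024, 10, 3, 20, 11, 43, 993359000, time.FixedZone("PDT", -7*3600))
	    	delta := guest.Sub(host) // -177.490772ms, matching the log

	    	const tolerance = 2 * time.Second // assumed for illustration, not minikube's value
	    	if delta > -tolerance && delta < tolerance {
	    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	    	} else {
	    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	    	}
	    }
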
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
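
	The three commands above form a netfilter fallback: the sysctl probe exits with status 255 because /proc/sys/net/bridge does not exist until the bridge module is loaded, so br_netfilter is modprobed and IPv4 forwarding is enabled. A compact sketch of the same sequence; the error handling is illustrative:

	    package main

	    import (
	    	"log"
	    	"os/exec"
	    )

	    // run executes a command and logs (but tolerates) failures.
	    func run(name string, args ...string) error {
	    	out, err := exec.Command(name, args...).CombinedOutput()
	    	if err != nil {
	    		log.Printf("%s %v: %v (%s)", name, args, err, out)
	    	}
	    	return err
	    }

	    func main() {
	    	// A failing probe ("cannot stat /proc/sys/net/bridge/...") just means
	    	// the module is not loaded yet, which is recoverable.
	    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
	    		_ = run("sudo", "modprobe", "br_netfilter")
	    	}
	    	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	    }
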
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
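
The 130-byte /etc/docker/daemon.json pushed at 20:11:44.802 pins docker's cgroup driver to cgroupfs so it agrees with the kubelet. The log does not show the file's contents, so the fields below are assumptions about its shape, not the logged bytes; only the native.cgroupdriver value is implied by the log:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumed shape: the only property the log implies is that docker's
	// cgroup driver must come out as "cgroupfs" to match the kubelet.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		fmt.Println("write daemon.json:", err)
		return
	}
	// Mirror the logged sequence: reload units, then restart docker.
	if err := exec.Command("sudo", "systemctl", "daemon-reload").Run(); err != nil {
		fmt.Println("daemon-reload:", err)
		return
	}
	if err := exec.Command("sudo", "systemctl", "restart", "docker").Run(); err != nil {
		fmt.Println("restart docker:", err)
	}
}
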
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
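
The stat failure at 20:11:47.942 is the expected first-start branch: the 342 MB preload tarball is uploaded only when the guest copy is missing, then unpacked into /var (docker's image store) and deleted to reclaim disk. The check-before-copy pattern, sketched with local file I/O standing in for scp (paths shortened from the log):

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensureFile copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp sequence in the log.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the expensive transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if err := ensureFile("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4", tarball); err != nil {
		fmt.Println(err)
		return
	}
	// Unpack with the same flags as the log, then delete the tarball.
	if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
		fmt.Println("extract:", err)
		return
	}
	_ = os.Remove(tarball)
}
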
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
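
The generated kubeadm config pins cgroupDriver: cgroupfs in its KubeletConfiguration document, which must agree with the docker daemon configured earlier; a mismatch is a classic kubelet crash-loop. A quick consistency check over the multi-document YAML, assuming gopkg.in/yaml.v3 (the struct covers only the field being verified):

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Only the field under test; everything else in the multi-document
// kubeadm.yaml is ignored by the decoder.
type kubeletConfig struct {
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var kc kubeletConfig
		if err := dec.Decode(&kc); err != nil {
			break // io.EOF ends the "---"-separated stream
		}
		if kc.CgroupDriver == "" {
			continue // not the KubeletConfiguration document
		}
		fmt.Println("kubelet cgroup driver:", kc.CgroupDriver)
		if kc.CgroupDriver != "cgroupfs" {
			fmt.Println("WARNING: does not match docker's cgroupfs setting")
		}
	}
}
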
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
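
The kube-vip manifest tunes leader election with vip_leaseduration=5, vip_renewdeadline=3, and vip_retryperiod=1 (seconds). As in client-go leader election, the values must satisfy leaseDuration > renewDeadline > retryPeriod so a holder can renew before its lease lapses; a trivial sanity check:

package main

import "fmt"

func main() {
	// Values from the generated kube-vip manifest (seconds).
	lease, renew, retry := 5, 3, 1

	// client-go style leader election requires this strict ordering: the
	// holder must finish renewing (renewDeadline) well before the lease
	// expires, retrying every retryPeriod until then.
	if lease > renew && renew > retry {
		fmt.Println("lease timings consistent")
	} else {
		fmt.Println("inconsistent timings: lease > renew > retry must hold")
	}
}
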
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
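
Both host.minikube.internal (at 20:11:47.907) and control-plane.minikube.internal here use the same idempotent /etc/hosts rewrite: grep -v strips any stale mapping, the fresh line is appended, and the temp file is copied back over the original. The same upsert expressed in Go (a sketch; minikube performs it over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing line for name and appends ip<TAB>name,
// so repeated runs converge on exactly one mapping.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, mirroring grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
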
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
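
certs.go reuses the cached minikubeCA to sign the per-profile certificates above: a client cert, an apiserver cert whose SANs include the HA virtual IP 192.169.0.254, and the aggregator proxy-client pair. A condensed standard-library version of that signing flow (key sizes, serials, and lifetimes are simplified assumptions; errors are elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in for the cached minikubeCA key pair (minikube loads it from
	// ~/.minikube/ca.key instead of generating it here).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver leaf: the SAN IPs match the log, including the kube-vip
	// virtual IP so clients dialing 192.169.0.254 see a valid cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.254"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
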
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
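
The 51391683.0, 3ec20f2e.0, and b5213941.0 link names above are OpenSSL subject-hash links: directory-based trust stores expect /etc/ssl/certs/<subject-hash>.0 to point at the PEM. Reproducing the hash-and-link step, shelling out to openssl as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash that OpenSSL
	// uses for certificate-directory lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 -> the PEM so TLS verification finds it.
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println("linked", link)
}
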
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
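
The --discovery-token-ca-cert-hash in the printed join commands is a sha256 over the CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the cluster CA before trusting the API server. It can be recomputed from ca.crt like so:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the
	// whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Run against this cluster's CA, the output should match the sha256:d64e6604... value in the log.
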
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
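
The five kubectl get sa default runs at roughly 500 ms intervals are a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, and the cluster-admin binding created above races against it. A generic form of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor retries cmd every interval until it exits 0 or the deadline passes.
func waitFor(interval, timeout time.Duration, cmd func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := cmd(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(500*time.Millisecond, 2*time.Minute, func() error {
		return exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
	})
	fmt.Println("default service account ready:", err == nil)
}
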
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
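For readers tracing what that ssh_runner sed pipeline actually did to CoreDNS: it splices a hosts block in front of the forward plugin and a log directive in front of errors, then pushes the result back with kubectl replace. Reconstructed purely from the sed expressions above (other plugins elided, the standard .:53 server block assumed), the patched Corefile excerpt would read:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.169.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The fallthrough line matters: names other than host.minikube.internal fall through the hosts plugin and still reach the forward plugin.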
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
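The 74.25µs here is the uncontended fast path; the Delay:500ms and Timeout:13m0s in the lock spec describe what acquireMachinesLock does when another process holds the machine lock: retry every half second, give up after 13 minutes. minikube itself uses a named mutex (the Name: hash in the spec above), not a lock file; purely to illustrate the acquire-with-deadline shape, here is a flock(2)-based sketch in Go, with an invented lock path and helper name:

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// acquire takes an exclusive advisory lock on path, retrying every delay
	// until timeout. This mirrors the Delay/Timeout fields in the log, not
	// minikube's actual named-mutex implementation.
	func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err = syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		f, err := acquire("/tmp/ha-214000-m02.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
	}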
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
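Each "Attempt N" above is one pass over macOS's bootpd lease database, looking for the MAC hyperkit generated for this VM; the node only appears once DHCP hands out a lease (attempt 5 here, yielding 192.169.0.6). A minimal Go sketch of that lookup, assuming the conventional /var/db/dhcpd_leases layout of brace-delimited key=value blocks; the lease struct and parseLeases helper are illustrative, not minikube's code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// lease holds the fields we care about from one dhcpd_leases entry.
	type lease struct{ name, ip, hwAddr string }

	// parseLeases scans the bootpd lease file, which (by assumption here)
	// stores entries as brace-delimited blocks of key=value lines.
	func parseLeases(path string) ([]lease, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		var leases []lease
		var cur lease
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = lease{}
			case line == "}":
				leases = append(leases, cur)
			case strings.HasPrefix(line, "name="):
				cur.name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// stored as "1,8e:24:b7:e1:5:14"; drop the "1," type prefix
				if i := strings.IndexByte(line, ','); i >= 0 {
					cur.hwAddr = line[i+1:]
				}
			}
		}
		return leases, sc.Err()
	}

	func main() {
		leases, err := parseLeases("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, l := range leases {
			if l.hwAddr == "8e:24:b7:e1:5:14" { // MAC generated for ha-214000-m02
				fmt.Println("IP:", l.ip)
			}
		}
	}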
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
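WaitForSSH, as the log shows, is nothing more than retrying a no-op `exit 0` over SSH until the guest's daemon answers. A self-contained sketch of that probe using golang.org/x/crypto/ssh; the retry interval and overall timeout are assumptions, and libmachine's real client differs in detail (the key path below is taken from the sshutil line above):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH dials the guest and runs "exit 0" until it succeeds or the
	// deadline passes — the same probe the log shows.
	func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if rerr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
	}

	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
			Timeout:         5 * time.Second,
		}
		if err := waitForSSH("192.169.0.6:22", cfg, 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("SSH is available")
	}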
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
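configureAuth mints a server certificate signed by the local CA whose SANs cover every name the node can be reached by — exactly the san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube] list logged above. A compact standard-library sketch of issuing such a cert; the file paths, the PKCS#1 assumption about the CA key, and the key-usage choices are illustrative, not minikube's exact code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Load the CA pair (paths shortened; the real ones live under .minikube/certs).
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA/PKCS#1 CA key

		serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
			// The san=[...] list from the provision.go line above:
			DNSNames:    []string{"ha-214000-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}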
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
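The docker.service contents above are rendered host-side and piped to a .new file; the `diff -u … || { mv; daemon-reload; enable; restart; }` one-liner then swaps it in only when the rendered unit differs from what is already installed, so reprovisioning an unchanged machine never restarts Docker (here diff failed because no unit existed yet, hence the symlink creation). A rough text/template sketch of the render step, with a struct holding just the values visible in this unit; the field names are guesses, not minikube's actual template, and the ExecStart line is abridged:

	package main

	import (
		"os"
		"text/template"
	)

	// dockerUnit carries only the fields visible in the rendered unit above;
	// the real template has many more.
	type dockerUnit struct {
		NoProxy          string
		InsecureRegistry string
		Provider         string
	}

	const unitTmpl = `[Service]
	Environment="NO_PROXY={{.NoProxy}}"
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
	`

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		err := t.Execute(os.Stdout, dockerUnit{
			NoProxy:          "192.169.0.5",
			InsecureRegistry: "10.96.0.0/12",
			Provider:         "hyperkit",
		})
		if err != nil {
			panic(err)
		}
	}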
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
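The guest-clock check runs `date +%s.%N` in the VM and diffs it against the host clock captured around the SSH round trip; here 1728011542.860223875 against 2024-10-03 20:12:22.68642 PDT gives exactly the logged 173.803875ms delta. A small Go sketch of that comparison (the one-second tolerance is an assumption; minikube's actual threshold may differ):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta converts `date +%s.%N` output such as "1728011542.860223875"
	// into a time.Time and returns the absolute offset from the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		d := time.Unix(sec, nsec).Sub(host)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Host timestamp and guest output taken verbatim from the log above.
		host := time.Date(2024, 10, 3, 20, 12, 22, 686420000, time.FixedZone("PDT", -7*3600))
		d, err := clockDelta("1728011542.860223875\n", host)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance: %v\n", d, d <= tolerance)
	}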
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
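This warning (repeated below at proxy.go:119) records minikube checking whether the new node's IP, 192.169.0.6, is already covered by NO_PROXY (here a single IP, 192.169.0.5); "ip not in block" means no entry or CIDR matched. A hedged sketch of that containment test (illustrative, not minikube's proxy.go):

    // noproxy.go - sketch of the NO_PROXY containment test behind the
    // "fail to check proxy env: Error ip not in block" warning above.
    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // ipInNoProxy reports whether ip matches a NO_PROXY entry, either as an
    // exact IP or as a member of a CIDR block.
    func ipInNoProxy(ip, noProxy string) bool {
    	target := net.ParseIP(ip)
    	for _, entry := range strings.Split(noProxy, ",") {
    		entry = strings.TrimSpace(entry)
    		if entry == ip {
    			return true
    		}
    		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Values from the log: NO_PROXY=192.169.0.5, new node at 192.169.0.6.
    	fmt.Println(ipInNoProxy("192.169.0.6", "192.169.0.5")) // false -> warning
    }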
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
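Applied in sequence, the sed edits above leave /etc/containerd/config.toml with a fragment along these lines. This is a reconstruction from the commands for readability, not a dump of the actual file; section placement follows containerd's standard layout:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false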
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
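The journal above pins the failure on dockerd timing out while dialing containerd's socket ("context deadline exceeded" after roughly a minute of retrying). A self-contained Go sketch of that dial-with-retry-until-deadline (a hypothetical probe, not dockerd code; the 10-second timeout is shortened for illustration), which gives up the same way when /run/containerd/containerd.sock never starts accepting connections:

    // probesock.go - sketch of the dial dockerd performs against containerd's
    // unix socket on startup, reduced to a standalone readiness probe.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	var d net.Dialer
    	for {
    		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
    		if err == nil {
    			conn.Close()
    			fmt.Println("containerd socket is accepting connections")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			// Mirrors the journal line: the daemon gave up waiting for
    			// containerd and failed with "context deadline exceeded".
    			fmt.Println("gave up:", ctx.Err())
    			return
    		case <-time.After(500 * time.Millisecond):
    			// containerd not up yet; retry until the deadline expires.
    		}
    	}
    }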
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         13 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     13 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         14 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         14 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         14 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      39s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           37s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                10s                kubelet          Node ha-214000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:26:00 up 14 min,  0 users,  load average: 0.55, 0.29, 0.20
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:24:43.498582       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:43.499114       1 main.go:299] handling current node
	I1004 03:24:53.496563       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:24:53.496603       1 main.go:299] handling current node
	I1004 03:25:03.497896       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:03.498084       1 main.go:299] handling current node
	I1004 03:25:13.497129       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:13.497481       1 main.go:299] handling current node
	I1004 03:25:23.497073       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:23.497897       1 main.go:299] handling current node
	I1004 03:25:23.498093       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:23.498141       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:23.498481       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.7 Flags: [] Table: 0} 
	I1004 03:25:33.496961       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:33.497126       1 main.go:299] handling current node
	I1004 03:25:33.497275       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:33.497400       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:43.501426       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:43.501712       1 main.go:299] handling current node
	I1004 03:25:43.502096       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:43.502292       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:53.496447       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:53.496678       1 main.go:299] handling current node
	I1004 03:25:53.496782       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:53.496864       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:21:05 ha-214000 kubelet[2148]: E1004 03:21:05.386512    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:21:05 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:21:05 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:25:04 ha-214000 kubelet[2148]: E1004 03:25:04.972452    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
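The repeating kubelet error at the end of the log dump above is the iptables canary probe: the kubelet periodically creates a KUBE-KUBELET-CANARY chain in the nat table to detect rule flushes, and on this Buildroot guest the ip6tables "nat" table is unavailable (the kernel module is not loaded), so the IPv6 probe fails every minute. A minimal Go sketch of an equivalent probe follows; it assumes an ip6tables binary on PATH, and the kubelet's exact flags may differ from this.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Create a canary chain in the ip6tables "nat" table, as the kubelet
		// does. On a guest without the ip6table_nat module this fails with
		// "can't initialize ip6tables table `nat'", matching the log above.
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}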
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m26s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2s (x2 over 11s)     default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (2.98s)
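The FailedScheduling events in the pod description above are the signature of a required pod anti-affinity term: once every node already runs a matching busybox replica, no node can accept another, which is why the message moves from "0/1 nodes" to "0/2 nodes" as the second node joins. A minimal sketch of such a term using k8s.io/api/core/v1 follows; the label selector and topology key here are assumptions for illustration, and the test's actual deployment manifest may differ.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Require that no two pods labeled app=busybox share a node.
		affinity := &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		// With two nodes and two placed replicas, a third replica yields
		// "0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules".
		fmt.Printf("%+v\n", affinity)
	}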

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 node stop m02 -v=7 --alsologtostderr: (8.356671236s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 7 (277.897043ms)

                                                
                                                
-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:26:10.201287    4218 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:26:10.201526    4218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:26:10.201532    4218 out.go:358] Setting ErrFile to fd 2...
	I1003 20:26:10.201536    4218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:26:10.201712    4218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:26:10.201890    4218 out.go:352] Setting JSON to false
	I1003 20:26:10.201915    4218 mustload.go:65] Loading cluster: ha-214000
	I1003 20:26:10.201972    4218 notify.go:220] Checking for updates...
	I1003 20:26:10.202278    4218 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:26:10.202297    4218 status.go:174] checking status of ha-214000 ...
	I1003 20:26:10.202752    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.202806    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.214111    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51252
	I1003 20:26:10.214451    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.214861    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.214892    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.215093    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.215199    4218 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:26:10.215289    4218 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:10.215355    4218 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:26:10.216405    4218 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:26:10.216425    4218 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:26:10.216667    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.216686    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.227515    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51254
	I1003 20:26:10.227830    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.228169    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.228185    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.228419    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.228532    4218 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:26:10.228631    4218 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:26:10.228896    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.228918    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.239767    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51256
	I1003 20:26:10.240100    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.240428    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.240444    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.240647    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.240756    4218 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:26:10.240914    4218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:26:10.240935    4218 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:26:10.241027    4218 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:26:10.241115    4218 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:26:10.241209    4218 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:26:10.241294    4218 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:26:10.275164    4218 ssh_runner.go:195] Run: systemctl --version
	I1003 20:26:10.279603    4218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:26:10.291321    4218 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:26:10.291345    4218 api_server.go:166] Checking apiserver status ...
	I1003 20:26:10.291397    4218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:26:10.303034    4218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:26:10.311196    4218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:26:10.311253    4218 ssh_runner.go:195] Run: ls
	I1003 20:26:10.314413    4218 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:26:10.317754    4218 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:26:10.317766    4218 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:26:10.317772    4218 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:26:10.317782    4218 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:26:10.318036    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.318061    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.329150    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51260
	I1003 20:26:10.329469    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.329826    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.329847    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.330057    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.330180    4218 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:26:10.330263    4218 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:10.330337    4218 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:26:10.331425    4218 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
	I1003 20:26:10.331449    4218 status.go:371] ha-214000-m02 host status = "Stopped" (err=<nil>)
	I1003 20:26:10.331456    4218 status.go:384] host is not running, skipping remaining checks
	I1003 20:26:10.331460    4218 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:26:10.331472    4218 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:26:10.331755    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.331787    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.342663    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51262
	I1003 20:26:10.343009    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.343337    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.343345    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.343569    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.343698    4218 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:26:10.343788    4218 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:10.343885    4218 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:26:10.344962    4218 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:26:10.344971    4218 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:26:10.345238    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.345261    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.356507    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51264
	I1003 20:26:10.356842    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.357289    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.357323    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.357569    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.357689    4218 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:26:10.357785    4218 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:26:10.358065    4218 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:10.358087    4218 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:10.368983    4218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51266
	I1003 20:26:10.369335    4218 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:10.369656    4218 main.go:141] libmachine: Using API Version  1
	I1003 20:26:10.369667    4218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:10.369874    4218 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:10.369992    4218 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:26:10.370149    4218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:26:10.370170    4218 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:26:10.370268    4218 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:26:10.370368    4218 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:26:10.370482    4218 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:26:10.370569    4218 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:26:10.403728    4218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:26:10.416567    4218 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
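The status trace above decides that the apiserver is healthy by fetching /healthz on the cluster endpoint and expecting HTTP 200 with body "ok" (visible at 20:26:10.317754). A minimal Go sketch of the same probe follows; the endpoint is taken from the trace, and skipping TLS verification is an assumption for a quick manual check only, since minikube itself authenticates against the profile's CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Probe the HA virtual IP's apiserver healthz endpoint (self-signed
		// serving cert, so verification is skipped for this sketch).
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}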
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr": ha-214000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-214000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-214000-m03
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr": ha-214000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-214000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-214000-m03
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr": ha-214000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-214000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-214000-m03
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr": ha-214000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-214000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-214000-m03
type: Worker
host: Running
kubelet: Running

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.074803335s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
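
	[Note: the acquireMachinesLock lines above show minikube guarding machine creation with a polled lock: retry every Delay (500ms) until Timeout (13m0s). A minimal sketch of that retry shape, with tryLock standing in for the real file-based lock implementation:]

	import (
		"fmt"
		"time"
	)

	// acquireWithTimeout polls tryLock every delay until it succeeds or the
	// deadline passes, mirroring the Delay/Timeout fields in the lock spec.
	func acquireWithTimeout(tryLock func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !tryLock() {
			if time.Now().After(deadline) {
				return fmt.Errorf("failed to acquire lock within %s", timeout)
			}
			time.Sleep(delay)
		}
		return nil
	}

	[Here the lock was uncontended, so acquisition took only 81.809µs.]
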
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
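
	[Note: the "Launching plugin server" / "Plugin server listening at 127.0.0.1:50857" lines reflect libmachine's out-of-process driver model: the docker-machine-driver-hyperkit binary serves the driver over Go's net/rpc on an ephemeral localhost port, and each subsequent "Calling .GetVersion" / ".GetMachineName" line is a proxied RPC. A minimal sketch of that shape; the type and method signatures here are illustrative, not libmachine's actual API:]

	package main

	import (
		"net"
		"net/rpc"
	)

	// Driver stands in for the RPC surface a machine driver plugin exposes.
	type Driver struct{ name string }

	func (d *Driver) GetMachineName(_ string, reply *string) error {
		*reply = d.name
		return nil
	}

	func main() {
		rpc.Register(&Driver{name: "ha-214000"})
		// ":0" lets the OS pick a free localhost port; the plugin reports it
		// back to the client (50857 in the log above)
		l, _ := net.Listen("tcp", "127.0.0.1:0")
		rpc.Accept(l)
	}
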
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
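
	[Note: the driver discovers the VM's IP by polling /var/db/dhcpd_leases for an entry whose hardware address matches the MAC hyperkit generated (a:aa:e8:3c:fe:20); the match appears on attempt 5, once the guest has taken a lease. A sketch of that lookup, assuming the macOS lease-file format of "key=value" lines grouped between braces:]

	import (
		"fmt"
		"os"
		"strings"
	)

	// dhcpLeaseIP returns the IP address of the lease whose hw_address ends
	// with mac. ip_address precedes hw_address within each lease block, so a
	// single pass that remembers the last IP seen is enough.
	func dhcpLeaseIP(path, mac string) (string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		var ip string
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}
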
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
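
	[Note: "Waiting for SSH" works by repeatedly running the no-op command "exit 0" over SSH until it returns cleanly; the empty "SSH cmd err, output" line above is the first success. A sketch of that probe using the system ssh client; the retry budget and options here are illustrative:]

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH retries a trivial remote command until the guest's sshd
	// accepts connections and honors the generated key.
	func waitForSSH(host, keyPath string) error {
		for i := 0; i < 30; i++ {
			err := exec.Command("ssh",
				"-i", keyPath,
				"-o", "StrictHostKeyChecking=no",
				"-o", "ConnectTimeout=5",
				"docker@"+host, "exit 0").Run()
			if err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s never became available", host)
	}
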
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
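
	[Note: provisioner detection is driven by `cat /etc/os-release`: the ID field ("buildroot" in the listing above) selects a matching provisioner. A sketch of extracting that field; the provisioner registry itself is assumed:]

	import (
		"errors"
		"strings"
	)

	// osReleaseID pulls the ID= value out of /etc/os-release output,
	// e.g. "buildroot" from the listing above.
	func osReleaseID(osRelease string) (string, error) {
		for _, line := range strings.Split(osRelease, "\n") {
			if id, ok := strings.CutPrefix(strings.TrimSpace(line), "ID="); ok {
				return strings.Trim(id, `"`), nil
			}
		}
		return "", errors.New("no ID field in /etc/os-release")
	}
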
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
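
	[Note: the copyHostCerts sequence above is deliberately remove-then-copy: any ca.pem/cert.pem/key.pem already under the store path is deleted and rewritten so stale material cannot survive a rerun. A sketch of one such replacement, with an illustrative 0600 mode:]

	import "os"

	// replaceFile removes dst if present, then writes src's bytes fresh,
	// matching the found/removing/cp pattern in the log.
	func replaceFile(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		return os.WriteFile(dst, data, 0600)
	}
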
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
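
	[Note: the guest-clock check runs `date +%s.%N` on the guest, parses the result, and compares it against the host clock; here the guest ran 177.49ms behind, within tolerance. A sketch of the comparison; the tolerance threshold itself is not shown in the log, so it is assumed:]

	import (
		"strconv"
		"strings"
		"time"
	)

	// clockDelta converts the guest's seconds.nanoseconds output (e.g.
	// 1728011503.815868228 above) into a guest-minus-host offset. float64
	// loses sub-microsecond precision, which is fine for a ms-scale check.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*1e9))
		return guest.Sub(host), nil
	}
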
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
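
	[Note: the netfilter probe above is a two-step fallback: `sysctl net.bridge.bridge-nf-call-iptables` exits 255 while the br_netfilter module is unloaded, so the module is loaded explicitly and ip_forward is enabled afterwards. A sketch of that sequence over a generic command runner; the runner signature is assumed:]

	// ensureBridgeNetfilter probes the sysctl key and falls back to loading
	// br_netfilter when the key does not exist yet.
	func ensureBridgeNetfilter(run func(cmd string) error) error {
		if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
			// the key only appears under /proc/sys once the module is loaded
			if err := run("sudo modprobe br_netfilter"); err != nil {
				return err
			}
		}
		return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
	}
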
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
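
	[Note: "configuring docker to use cgroupfs" is done by pushing a small /etc/docker/daemon.json from memory (130 bytes here). The log records only the file's size, not its contents, so the fields below are illustrative rather than verbatim:]

	import "encoding/json"

	// dockerDaemonJSON renders a minimal daemon.json selecting a cgroup
	// driver; minikube's real file may carry additional fields.
	func dockerDaemonJSON(driver string) ([]byte, error) {
		return json.MarshalIndent(map[string]any{
			"exec-opts": []string{"native.cgroupdriver=" + driver},
		}, "", "  ")
	}
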
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
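The preload flow above is: stat the tarball on the node, scp ~342 MB from the host cache if absent, unpack into /var with xattrs preserved, then delete the tarball. A condensed sketch of the node-side half using the same tar flags as the log; run it only inside a disposable VM:

package main

import (
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check, like the `stat -c "%s %y"` probe in the log.
	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		// Here minikube scp's the cached preload from the host first.
		panic("tarball missing: copy it over before unpacking")
	}
	// Same flags as the log: keep security.capability xattrs,
	// decompress with lz4, unpack under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove(tarball) // the log shows rm after a successful unpack
}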
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
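The kubelet drop-in above is templated from per-node values (binary version, hostname override, node IP). A sketch of that templating; `nodeCfg` and its fields are illustrative, not minikube's actual structs:

package main

import (
	"fmt"
	"strings"
)

// nodeCfg carries just the values that vary per node in the ExecStart
// line shown in the log.
type nodeCfg struct {
	Version, Hostname, NodeIP string
}

func kubeletExecStart(n nodeCfg) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + n.Hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + n.NodeIP,
	}
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s",
		n.Version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart(nodeCfg{"v1.31.1", "ha-214000", "192.169.0.5"}))
}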
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
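lb_enable in the manifest above was switched on at 20:11:53.678865 only after the IPVS modules loaded (the `modprobe --all ip_vs ...` probe just before it). A sketch of that gate; Linux-only, needs root, and the decision logic is paraphrased rather than copied from kube-vip.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same module list as the log's modprobe line.
	mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
	err := exec.Command("sudo",
		append([]string{"modprobe", "--all"}, mods...)...).Run()
	// Control-plane load-balancing is only enabled when IPVS is usable.
	lbEnable := err == nil
	fmt.Printf("control-plane load-balancing enabled: %v\n", lbEnable)
}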
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
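Each profile cert above is generated on the host, covering the SAN IPs listed at 20:11:54.048838 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.169.0.5, 192.169.0.254). A minimal self-signed illustration with Go's crypto/x509 for the same IP set; this is not minikube's crypto.go (the real certs are signed by minikubeCA), just the standard-library shape of the operation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN IPs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
			net.ParseIP("192.169.0.254"),
		},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}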
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
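The `b5213941.0`-style names above are OpenSSL subject-name hashes: `openssl x509 -hash -noout` prints the hash, and the `<hash>.0` symlink in /etc/ssl/certs is the filename TLS libraries look up. A sketch of the same step; point `certsDir` at a scratch directory, not the real store:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash creates the "<subject-hash>.0" symlink that the log's
// `openssl x509 -hash` plus `ln -fs` pair produces.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("minikubeCA.pem", "."); err != nil {
		panic(err)
	}
}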
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
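The four grep-and-remove rounds above implement stale-config cleanup: a kubeconfig survives only if it already targets the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. The same logic in miniature (needs root against a real node; paths are the ones from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: treat as stale, like the
		// grep-then-rm pairs in the log (here the files simply do
		// not exist yet, so every rm is a no-op).
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale", f)
			_ = os.Remove(f)
		}
	}
}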
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
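The repeated `kubectl get sa default` calls between 20:12:06 and 20:12:08 above are a readiness poll: the token controller has to create the default ServiceAccount before the RBAC binding can be considered settled. A sketch of that loop, assuming kubectl on PATH and KUBECONFIG pointed at the cluster:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit code 0 means the ServiceAccount exists.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default ServiceAccount ready")
			return
		}
		// The log shows roughly 500 ms between attempts.
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for default ServiceAccount")
}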
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
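The sed pipeline above splices a hosts block in front of CoreDNS's forward plugin so host.minikube.internal resolves in-cluster. The same splice in Go on an inlined Corefile fragment (the fragment itself is illustrative, not the cluster's full ConfigMap):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	// Block to insert, matching the one injected by the sed above.
	hosts := `        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }
`
	// Insert the hosts block immediately before the forward plugin,
	// once, leaving the rest of the Corefile untouched.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	fmt.Println(patched)
}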
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
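
The `Attempt 0` through `Attempt 5` blocks above show the driver's address-discovery loop: hyperkit itself never reports a guest IP, so the driver polls the macOS vmnet DHCP lease database every two seconds for the MAC it generated (8e:24:b7:e1:5:14) until a matching lease appears. A manual equivalent on the host, for anyone reproducing this by hand, is simply:

	# Search the vmnet DHCP lease database for the VM's generated MAC
	# (run on the macOS host; requires read access to /var/db).
	grep '8e:24:b7:e1:5:14' /var/db/dhcpd_leases
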
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
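
The shell block above (20:12:20.588372) is deliberately idempotent: `grep -xq` exits non-zero when no whole line matches, so the hostname mapping is rewritten or appended only when it is absent, and re-provisioning the same machine leaves /etc/hosts untouched. A quick spot-check over SSH on the new node would be:

	grep 'ha-214000-m02' /etc/hosts
	# expected: 127.0.1.1 ha-214000-m02
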
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
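
The SAN list logged here (127.0.0.1, 192.169.0.6, ha-214000-m02, localhost, minikube) is what lets the Docker TLS endpoint be reached by IP or by any of those names. If a TLS handshake fails later, the generated certificate can be inspected on the host; this sketch assumes OpenSSL 1.1.1+ for the `-ext` flag (older builds can use `-text` and grep for "Subject Alternative Name"):

	openssl x509 -noout -ext subjectAltName \
	  -in /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem
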
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
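
The `diff ... || { mv ...; }` command at 20:12:21.045791 is a small install-if-changed idiom: `diff` exits non-zero both when the unit files differ and, as happened here, when the target does not exist yet ("can't stat"), so the right-hand branch installs the new unit, reloads systemd, and (re)starts Docker only when something actually changed. Its skeleton, separated from the minikube-specific paths (unit.service is a placeholder name):

	sudo diff -u /lib/systemd/system/unit.service /lib/systemd/system/unit.service.new \
	  || { sudo mv /lib/systemd/system/unit.service.new /lib/systemd/system/unit.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart unit.service; }
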
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
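
The run of `sed` edits from 20:12:22.920 to 20:12:23.001 rewrites /etc/containerd/config.toml so containerd's behavior matches what kubelet expects on this node, most importantly `SystemdCgroup = false` to select the cgroupfs cgroup driver. Assuming the stock containerd 1.7 config layout (the file itself is not shown in this log), the decisive stanza ends up as:

	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = false
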
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
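
The status-255 sysctl at 20:12:23.018 is expected on a fresh Buildroot guest: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, and kube-proxy needs it (plus IPv4 forwarding) so bridged pod traffic traverses iptables. The recovery the log performs, condensed into the equivalent manual commands on the guest (with a re-check of the sysctl, which the log skips):

	sudo modprobe br_netfilter                         # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables     # now resolvable
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
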
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
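
The 130-byte /etc/docker/daemon.json payload is not echoed into the log. Docker's cgroup driver is conventionally selected via exec-opts, so a plausible shape for the file is the one generated below — an assumption about its contents, not a verbatim copy of what was scp'd:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Generates a daemon.json that forces the cgroupfs driver. The exact keys
// minikube writes are a guess here; only the cgroupfs intent is confirmed
// by the "configuring docker to use cgroupfs" log line above.
func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```
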
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
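
The decisive line in the journal above is dockerd[913] timing out while dialing /run/containerd/containerd.sock: the restarted daemon waits for its managed containerd to come up and gives up at the dial deadline, which is what turns `sudo systemctl restart docker` into the 1m1s failure ssh_runner reported. That failure mode can be reproduced in isolation with a deadline-bound dial of the same socket (a standalone sketch, not minikube code):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// Dials the containerd socket with a deadline. If nothing is listening,
// the dial errors out just as dockerd's startup wait did above.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}
```
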
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         14 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     14 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         14 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         14 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         14 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:26:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:26:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      50s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  NodeHasSufficientMemory  51s (x2 over 51s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x2 over 51s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x2 over 51s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           48s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                21s                kubelet          Node ha-214000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:26:12 up 14 min,  0 users,  load average: 0.47, 0.28, 0.20
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:25:03.497896       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:03.498084       1 main.go:299] handling current node
	I1004 03:25:13.497129       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:13.497481       1 main.go:299] handling current node
	I1004 03:25:23.497073       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:23.497897       1 main.go:299] handling current node
	I1004 03:25:23.498093       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:23.498141       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:23.498481       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.7 Flags: [] Table: 0} 
	I1004 03:25:33.496961       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:33.497126       1 main.go:299] handling current node
	I1004 03:25:33.497275       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:33.497400       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:43.501426       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:43.501712       1 main.go:299] handling current node
	I1004 03:25:43.502096       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:43.502292       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:53.496447       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:53.496678       1 main.go:299] handling current node
	I1004 03:25:53.496782       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:53.496864       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:26:03.500809       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:26:03.500955       1 main.go:299] handling current node
	I1004 03:26:03.501130       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:26:03.501308       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:25:04 ha-214000 kubelet[2148]: E1004 03:25:04.972452    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:26:04 ha-214000 kubelet[2148]: E1004 03:26:04.973468    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:26:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:26:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:26:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:26:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
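Editor's note on the dump above: it ends with two related symptoms from inside the guest. kube-proxy cannot clean up nftables rules ("could not run nftables command: ... Operation not supported") and therefore runs the iptables proxier in single-stack IPv4 mode, while kubelet's periodic iptables canary keeps failing because the ip6tables "nat" table is missing from the guest kernel. Below is a minimal Go sketch of how one could probe those same two backends from inside the VM; it is illustrative only and not part of the test harness, though the nft and ip6tables invocations mirror the exact failures logged above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs a command and reports whether it exited cleanly, mirroring
	// the capability checks kube-proxy and kubelet effectively perform.
	func probe(name string, args ...string) bool {
		return exec.Command(name, args...).Run() == nil
	}

	func main() {
		// kube-proxy's cleanup failed with "Operation not supported";
		// listing tables is a cheap way to see if nftables works at all.
		fmt.Println("nftables usable:", probe("nft", "list", "tables"))
		// kubelet's canary failed because the ip6tables nat table is
		// missing ("do you need to insmod?").
		fmt.Println("ip6tables nat:  ", probe("ip6tables", "-t", "nat", "-L"))
	}

Both probes failing is consistent with the run continuing anyway: kube-proxy itself logs "No iptables support for family" ipFamily="IPv6" and then "kube-proxy running in single-stack mode" ipFamily="IPv4", so these errors are noisy but non-fatal for this test.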
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m38s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  14s (x2 over 23s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
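Editor's note on the FailedScheduling events above: busybox-7dff88458-z5g4l stays Pending because every candidate node already runs a busybox replica matching the pod's anti-affinity term, and the "0/1" to "0/2" progression tracks ha-214000-m03 joining mid-test (m02 is stopped at this point). A minimal Go sketch of the kind of per-hostname anti-affinity that produces exactly this event text follows; the app=busybox selector comes from the labels in the describe output, but the actual Deployment spec is not shown in this log, so treat the term below as an assumption:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// One busybox replica per node: a second replica landing on the
		// same hostname violates this term, which the scheduler reports
		// as "didn't match pod anti-affinity rules".
		aff := corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		out, _ := json.MarshalIndent(aff, "", "  ")
		fmt.Println(string(out))
	}

Under that assumption the math in the events works out: with required anti-affinity and no preemptible victims, a third replica is unschedulable until a third node is Ready, which is why the pod only shows up in the non-running list after m02 goes down.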
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (11.26s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:415: expected profile "ha-214000" in json of 'profile list' to have "Degraded" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"A
PIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesV
ersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-securit
y-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"Custom
QemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
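Editor's note: ha_test.go:415 is asserting on the Status field of the JSON shown above. With m02 stopped, the profile should report "Degraded", but it still reports "OK". A minimal Go sketch of that check, assuming only the JSON shape visible in the failure message (a top-level "valid" array whose entries carry "Name" and "Status") and the same binary path used throughout this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-214000" && p.Status != "Degraded" {
				// This is the condition the test turns into the
				// failure message above.
				fmt.Printf("want Degraded, got %q\n", p.Status)
			}
		}
	}

Run against this profile at the moment the test sampled it, the loop prints want Degraded, got "OK", which is exactly what ha_test.go:415 reports.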
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.113096444s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
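
The retry loop above polls macOS's /var/db/dhcpd_leases until an entry for the VM's MAC address appears, then reads the IP from that entry. Below is a minimal Go sketch of such a lookup, assuming the hw_address=1,<mac> / ip_address=<ip> entry layout reflected in the log lines above (findIPForMAC is an illustrative name, not minikube's actual API):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans /var/db/dhcpd_leases-style text for an entry whose
// hw_address ends with the given MAC and returns that entry's ip_address.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// remember the most recent IP; hw_address follows it in an entry
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address looks like "1,a:aa:e8:3c:fe:20"; match the MAC part
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("IP:", ip) // the log above resolved to 192.169.0.5
}
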
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
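
The SSH wait above boils down to retrying a trivial remote command (exit 0) until the guest accepts connections. A rough sketch of that pattern using the stock ssh client; libmachine has its own SSH code, so the flags, user, and key path here are placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until it succeeds or attempts run out.
func waitForSSH(user, host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit", "0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is up
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %d attempts", host, attempts)
}

func main() {
	// placeholder key path; the real run used the machine's id_rsa
	if err := waitForSSH("docker", "192.169.0.5", "/path/to/id_rsa", 30); err != nil {
		fmt.Println(err)
	}
}
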
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
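
Provisioner detection is just parsing the /etc/os-release output shown above and matching its ID field. A small sketch under that assumption (parseOSRelease is an illustrative name):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release text into a key/value map,
// stripping optional quotes around values.
func parseOSRelease(text string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	// sample taken from the SSH output above
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
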
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
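
The guest-clock check compares the VM's `date +%s.%N` against the host clock and accepts a small skew, as the -177ms delta above did. A toy version of that tolerance check; the 2s threshold is an assumed value for illustration, not minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest/host clock skew is acceptable.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		return delta, -delta <= tol
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(-177 * time.Millisecond) // skew like the log's -177ms delta
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
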
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
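
The preload verification above is set membership: list what `docker images` reports and confirm every required registry.k8s.io image is present. A hedged sketch with the image names hard-coded from this log (missingImages is an illustrative helper, not minikube's code):

package main

import "fmt"

// missingImages returns the required images absent from the loaded set.
func missingImages(loaded, required []string) []string {
	have := map[string]bool{}
	for _, img := range loaded {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	// subset of the -- stdout -- image list above
	loaded := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
	}
	fmt.Println("missing:", missingImages(loaded, required))
}
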
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
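	Each cert installed above gets the same treatment: openssl x509 -hash -noout prints the subject hash, and a symlink named <hash>.0 in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, per the log) makes the CA discoverable to anything that scans that directory OpenSSL-style. A sketch of that installation step, shelling out to openssl the same way (installCACert is a hypothetical helper name):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCACert links certPath into certsDir under OpenSSL's hashed
	// lookup name, <subject-hash>.0, matching the ln -fs commands above.
	func installCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}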
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
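	Both join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash. That value is not a hash of the whole certificate but SHA-256 over the CA's DER-encoded Subject Public Key Info, so it can be recomputed offline from ca.crt; a sketch:
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// caCertHash reproduces the --discovery-token-ca-cert-hash value printed
	// by kubeadm above, computed from the cluster CA certificate on disk.
	func caCertHash(caPath string) (string, error) {
		data, err := os.ReadFile(caPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", caPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the cert bytes.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}
	
	func main() {
		h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(h)
	}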
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
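	The rest.Config dumped above is plain client-certificate auth against the HA VIP. Built by hand with client-go (host and paths copied from the log; assumes client-go is available), the equivalent is roughly the sketch below; the GET/PUT against /apis/storage.k8s.io/v1/storageclasses a few lines further down is such a clientset at work:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		home := "/Users/jenkins/minikube-integration/19546-1440/.minikube"
		cfg := &rest.Config{
			Host: "https://192.169.0.254:8443", // the APIServerHAVIP, not a node IP
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: home + "/profiles/ha-214000/client.crt",
				KeyFile:  home + "/profiles/ha-214000/client.key",
				CAFile:   home + "/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("storage classes:", len(scs.Items))
	}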
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
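	The pipeline at 20:12:08.948587 above patches the coredns ConfigMap by splicing a hosts stanza in front of the Corefile's "forward . /etc/resolv.conf" line, so host.minikube.internal resolves in-cluster while every other name still forwards. The string surgery alone, without the kubectl get/replace round-trip, looks roughly like:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// injectHostRecord inserts a CoreDNS hosts{} stanza, mapping
	// host.minikube.internal to hostIP, immediately before the
	// "forward . /etc/resolv.conf" line, as the sed expression above does.
	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(stanza)
			}
			b.WriteString(line)
		}
		return b.String()
	}
	
	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.169.0.1"))
	}
	
	The fallthrough directive is what lets lookups that miss the injected record continue on to the forward plugin.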
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
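
Editor's note: the CmdLine above is the complete hyperkit invocation the driver spawns. For readers unfamiliar with the driver, a minimal sketch of how such a process could be launched from Go follows; launchHyperkit, statePath, and the disk.rawdisk naming are illustrative stand-ins, not minikube's actual API.

```go
package main

import "os/exec"

// launchHyperkit starts a hyperkit VM with the flags seen in the log
// above. statePath and uuid are hypothetical parameters; the real driver
// also wires up pid-file polling, logging, and a console pty.
func launchHyperkit(statePath, uuid string) (*exec.Cmd, error) {
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", statePath+"/hyperkit.pid", // child writes its pid here
		"-c", "2", // 2 vCPUs, as in the log
		"-m", "2200M", // 2200 MiB of RAM
		"-s", "0:0,hostbridge", // PCI host bridge
		"-s", "31,lpc", // LPC bus for the serial console
		"-s", "1:0,virtio-net", // NIC on the vmnet network
		"-U", uuid, // stable UUID keeps the DHCP lease stable too
		"-s", "2:0,virtio-blk,"+statePath+"/disk.rawdisk", // name is illustrative
		"-s", "3,ahci-cd,"+statePath+"/boot2docker.iso",
		"-s", "4,virtio-rnd", // entropy source for the guest
		"-f", "kexec,"+statePath+"/bzimage,"+statePath+"/initrd,earlyprintk=serial",
	)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}
```
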
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
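
Editor's note: the "Attempt 0..5" loop above polls /var/db/dhcpd_leases every two seconds until the generated MAC appears. A compressed sketch of that scan, assuming the key=value block format macOS's bootpd writes (the field order matches the entries logged above; this is not minikube's parser):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// leaseIPForMAC scans /var/db/dhcpd_leases for an hw_address matching mac
// and returns the ip_address from the same lease block. Assumes
// ip_address precedes hw_address within a block, as in the entries above.
func leaseIPForMAC(mac string) (string, error) {
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // start of a new lease block
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address carries a hardware-type prefix, e.g. "1,8e:24:b7:e1:5:14"
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}
```
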
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
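
Editor's note: the `exit 0` probe above is how libmachine confirms SSH is reachable before provisioning. A rough equivalent using golang.org/x/crypto/ssh; the retry count and 2s cadence are illustrative, chosen to mirror the attempt spacing visible in the log:

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials host:22 until a trivial "exit 0" succeeds or attempts
// run out. config would normally carry the docker user's private key.
func waitForSSH(host string, config *ssh.ClientConfig, attempts int) error {
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", host+":22", config)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0") // same probe as in the log
				session.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", host)
}
```
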
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
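
Editor's note: configureAuth above generates a server certificate whose SANs cover the lease IP, the hostname, localhost, and minikube. A compressed sketch of that step with crypto/x509; serial handling and PEM persistence are simplified, and newServerCert is an illustrative name:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for the given SANs with the
// provided CA, mirroring the san=[...] list in the log. Real code writes
// the PEM-encoded result to .minikube/machines/server.pem.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
	ips []net.IP, names []string) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified serial
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,   // e.g. 127.0.0.1, 192.169.0.6
		DNSNames:     names, // e.g. ha-214000-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
```
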
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
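
Editor's note: the tolerance check above compares the guest's `date +%s.%N` against host time. A minimal sketch of that comparison; runSSH is a hypothetical helper returning the remote command's stdout, standing in for the logged SSH plumbing:

```go
import (
	"strconv"
	"strings"
	"time"
)

// clockDelta runs `date +%s.%N` on the guest and returns guest-minus-host
// time; positive when the guest clock runs ahead, as in the run above
// (delta=173.803875ms).
func clockDelta(runSSH func(cmd string) (string, error)) (time.Duration, error) {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Until(guest), nil
}
```
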
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
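
Editor's note: the run of sed commands above rewrites /etc/containerd/config.toml on the guest so containerd uses the cgroupfs driver and the runc v2 shim. A stripped-down runner for such edits; runSSH is a hypothetical helper standing in for minikube's ssh_runner, and the command list is a subset copied from the log:

```go
// applyContainerdEdits replays a few of the config.toml edits seen above.
// Each command is executed verbatim on the guest over SSH.
func applyContainerdEdits(runSSH func(cmd string) error) error {
	edits := []string{
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
	}
	for _, e := range edits {
		if err := runSSH(e); err != nil {
			return err
		}
	}
	return nil
}
```
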
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
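
Editor's note: the 130-byte /etc/docker/daemon.json scp'd above is not shown in the log. A guess at its shape, assuming only the cgroupfs exec-opt the "configuring docker" line describes; this is not a copy of minikube's actual payload:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dockerDaemonConfig models a minimal daemon.json that pins the cgroup
// driver to cgroupfs. Field set is an assumption, not taken from the log.
type dockerDaemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := dockerDaemonConfig{
		ExecOpts: []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // would be scp'd to /etc/docker/daemon.json
}
```
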
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         13 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         14 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     14 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         14 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         14 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         14 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:26:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:23:48 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:26:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:25:50 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      53s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  54s (x2 over 54s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x2 over 54s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           51s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                24s                kubelet          Node ha-214000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.474228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.474262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:12:00.479105Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:12:00.479194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	
	
	==> kernel <==
	 03:26:15 up 14 min,  0 users,  load average: 0.47, 0.28, 0.20
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:25:23.497073       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:23.497897       1 main.go:299] handling current node
	I1004 03:25:23.498093       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:23.498141       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:23.498481       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.7 Flags: [] Table: 0} 
	I1004 03:25:33.496961       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:33.497126       1 main.go:299] handling current node
	I1004 03:25:33.497275       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:33.497400       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:43.501426       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:43.501712       1 main.go:299] handling current node
	I1004 03:25:43.502096       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:43.502292       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:25:53.496447       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:25:53.496678       1 main.go:299] handling current node
	I1004 03:25:53.496782       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:25:53.496864       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:26:03.500809       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:26:03.500955       1 main.go:299] handling current node
	I1004 03:26:03.501130       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:26:03.501308       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:26:13.497046       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:26:13.497074       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:26:13.497430       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:26:13.497522       1 main.go:299] handling current node
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:28.041091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.702731ms"
	I1004 03:13:28.041373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.492µs"
	I1004 03:13:35.064365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.376356ms"
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:22:04 ha-214000 kubelet[2148]: E1004 03:22:04.973240    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:22:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:22:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:04 ha-214000 kubelet[2148]: E1004 03:23:04.972777    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:04 ha-214000 kubelet[2148]: E1004 03:24:04.972871    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:25:04 ha-214000 kubelet[2148]: E1004 03:25:04.972452    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:25:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:25:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:26:04 ha-214000 kubelet[2148]: E1004 03:26:04.973468    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:26:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:26:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:26:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:26:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
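Two error families dominate the captured log above, and neither is the direct cause of the failure. The kube-scheduler "forbidden" list/watch errors are the usual control-plane startup race: the scheduler's informers come up before the bootstrap RBAC for system:kube-scheduler has propagated, the reflectors retry, and the later "Caches are synced" line shows they recovered. The kubelet "iptables canary" error repeats every minute because the guest kernel exposes no ip6tables nat table, so the IPv6 canary chain can never be created; kubelet logs it and moves on. Both readings can be spot-checked once the cluster is up (a diagnostic sketch, not part of this test run; ip6table_nat is the module name assumed here):

	kubectl --context ha-214000 auth can-i list persistentvolumes --as=system:kube-scheduler
	out/minikube-darwin-amd64 -p ha-214000 ssh "lsmod | grep ip6table_nat || echo module absent"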
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m41s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  17s (x2 over 26s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
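The FailedScheduling events explain the Pending pod: the test's busybox replicas repel each other with required pod anti-affinity, every node currently available already runs one, and preemption cannot help because evicting a sibling replica would not make the rule satisfiable. The events also track the node pool shrinking and recovering (0/1 nodes while a control plane is down, 0/2 once it is back but both nodes are occupied). The constraint can be read straight off the workload (a sketch; it assumes the owning Deployment is named busybox, as the ReplicaSet name busybox-7dff88458 suggests):

	kubectl --context ha-214000 get deploy busybox \
	  -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'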
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (314.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 node start m02 -v=7 --alsologtostderr
E1003 20:28:01.996466    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:30:10.882617    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 node start m02 -v=7 --alsologtostderr: exit status 80 (4m21.791319709s)

                                                
                                                
-- stdout --
	* Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	* Restarting existing hyperkit VM for "ha-214000-m02" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:26:16.094310    4270 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:26:16.094668    4270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:26:16.094674    4270 out.go:358] Setting ErrFile to fd 2...
	I1003 20:26:16.094678    4270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:26:16.094867    4270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:26:16.095200    4270 mustload.go:65] Loading cluster: ha-214000
	I1003 20:26:16.095542    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:26:16.095898    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:16.095936    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:16.106603    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51337
	I1003 20:26:16.106967    4270 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:16.107385    4270 main.go:141] libmachine: Using API Version  1
	I1003 20:26:16.107397    4270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:16.107610    4270 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:16.107744    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:26:16.107879    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:16.107910    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:26:16.109020    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
	W1003 20:26:16.109044    4270 host.go:58] "ha-214000-m02" host status: Stopped
	I1003 20:26:16.130540    4270 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:26:16.151383    4270 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:26:16.151444    4270 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:26:16.151465    4270 cache.go:56] Caching tarball of preloaded images
	I1003 20:26:16.151669    4270 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:26:16.151683    4270 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:26:16.151812    4270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:26:16.152412    4270 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:26:16.152540    4270 start.go:364] duration metric: took 78.245µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:26:16.152556    4270 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:26:16.152568    4270 fix.go:54] fixHost starting: m02
	I1003 20:26:16.152804    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:16.152822    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:16.163733    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51339
	I1003 20:26:16.164093    4270 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:16.164468    4270 main.go:141] libmachine: Using API Version  1
	I1003 20:26:16.164494    4270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:16.164731    4270 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:16.164844    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:16.164952    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:26:16.165034    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:16.165104    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:26:16.166191    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
	I1003 20:26:16.166215    4270 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
	I1003 20:26:16.166231    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	W1003 20:26:16.166320    4270 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:26:16.187605    4270 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
	I1003 20:26:16.208141    4270 main.go:141] libmachine: (ha-214000-m02) Calling .Start
	I1003 20:26:16.208435    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:16.208487    4270 main.go:141] libmachine: (ha-214000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:26:16.210280    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
	I1003 20:26:16.210292    4270 main.go:141] libmachine: (ha-214000-m02) DBG | pid 3812 is in state "Stopped"
	I1003 20:26:16.210310    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid...
	I1003 20:26:16.210589    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:26:16.236202    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:26:16.236228    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:26:16.236381    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041c8a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:26:16.236410    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041c8a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:26:16.236468    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:26:16.236503    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:26:16.236514    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:26:16.237892    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Pid is 4274
	I1003 20:26:16.238320    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:26:16.238347    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:16.238441    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:26:16.240120    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:26:16.240221    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:26:16.240234    4270 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 20:26:16.240249    4270 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:26:16.240262    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:26:16.240273    4270 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:26:16.240317    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:26:16.241056    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:26:16.241242    4270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
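	A detail worth calling out in the block above: the hyperkit driver runs no agent in the guest, so it recovers the VM's address by matching the MAC it generated (8e:24:b7:e1:5:14) against the macOS DHCP lease table. The lookup can be reproduced by hand on the host (a sketch; lease-file layout varies slightly between macOS releases):

	grep -B 2 '8e:24:b7:e1:5:14' /var/db/dhcpd_leases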
	I1003 20:26:16.241677    4270 machine.go:93] provisionDockerMachine start ...
	I1003 20:26:16.241687    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:16.241796    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:16.241906    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:16.242018    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:16.242125    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:16.242223    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:16.242361    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:16.242543    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:16.242552    4270 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:26:16.248317    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:26:16.257128    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:26:16.258139    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:26:16.258158    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:26:16.258186    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:26:16.258200    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:26:16.642174    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:26:16.642193    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:26:16.756948    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:26:16.756964    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:26:16.756982    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:26:16.756993    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:26:16.757857    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:26:16.757868    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:26:22.342445    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:26:22.342460    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:26:22.342468    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:26:22.366499    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:26:29.405110    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:26:29.405123    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:26:29.405266    4270 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:26:29.405278    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:26:29.405375    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.405453    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:29.405538    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.405631    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.405706    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:29.405863    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:29.405996    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:29.406004    4270 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:26:29.471653    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:26:29.471672    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.471810    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:29.471914    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.472009    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.472088    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:29.472223    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:29.472372    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:29.472383    4270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:26:29.535149    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:26:29.535169    4270 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:26:29.535190    4270 buildroot.go:174] setting up certificates
	I1003 20:26:29.535203    4270 provision.go:84] configureAuth start
	I1003 20:26:29.535213    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:26:29.535354    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:26:29.535428    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.535517    4270 provision.go:143] copyHostCerts
	I1003 20:26:29.535547    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:26:29.535630    4270 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:26:29.535638    4270 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:26:29.535781    4270 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:26:29.535997    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:26:29.536053    4270 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:26:29.536058    4270 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:26:29.536145    4270 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:26:29.536321    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:26:29.536370    4270 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:26:29.536375    4270 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:26:29.536460    4270 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:26:29.536624    4270 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:26:29.693826    4270 provision.go:177] copyRemoteCerts
	I1003 20:26:29.693897    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:26:29.693917    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.694071    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:29.694173    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.694271    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:29.694363    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:26:29.727948    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:26:29.728021    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:26:29.747586    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:26:29.747651    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:26:29.766954    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:26:29.767024    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:26:29.786607    4270 provision.go:87] duration metric: took 251.382216ms to configureAuth
	I1003 20:26:29.786621    4270 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:26:29.786796    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:26:29.786809    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:29.786957    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.787053    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:29.787166    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.787250    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.787336    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:29.787462    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:29.787583    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:29.787591    4270 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:26:29.840605    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:26:29.840617    4270 buildroot.go:70] root file system type: tmpfs
	I1003 20:26:29.840708    4270 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:26:29.840724    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.840854    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:29.840946    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.841023    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.841115    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:29.841259    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:29.841399    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:29.841445    4270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:26:29.905639    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:26:29.905660    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:29.905801    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:29.905895    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.905998    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:29.906109    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:29.906261    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:29.906395    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:29.906407    4270 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:26:31.455013    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:26:31.455027    4270 machine.go:96] duration metric: took 15.213214447s to provisionDockerMachine
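	The docker.service exchange above is deliberately idempotent: the provisioner renders the unit to docker.service.new, and only when diff reports a difference (or, as here, the installed unit does not exist yet) does it move the file into place, daemon-reload, enable, and restart. The active unit can be confirmed afterwards (a sketch, not run by the test):

	out/minikube-darwin-amd64 -p ha-214000 ssh -n m02 "sudo systemctl cat docker.service | head -n 5"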
	I1003 20:26:31.455041    4270 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:26:31.455051    4270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:26:31.455061    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:31.455256    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:26:31.455268    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:31.455362    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:31.455457    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:31.455546    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:31.455639    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:26:31.493036    4270 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:26:31.496275    4270 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:26:31.496291    4270 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:26:31.496452    4270 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:26:31.496697    4270 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:26:31.496705    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:26:31.497002    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:26:31.511416    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:26:31.545413    4270 start.go:296] duration metric: took 90.36222ms for postStartSetup
	I1003 20:26:31.545436    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:31.545649    4270 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:26:31.545663    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:31.545749    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:31.545844    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:31.545924    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:31.546030    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:26:31.580346    4270 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:26:31.580423    4270 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:26:31.634178    4270 fix.go:56] duration metric: took 15.481476216s for fixHost
	I1003 20:26:31.634211    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:31.634478    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:31.634677    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:31.634868    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:31.635053    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:31.635309    4270 main.go:141] libmachine: Using SSH client type: native
	I1003 20:26:31.635552    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:26:31.635566    4270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:26:31.691103    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012391.795698782
	
	I1003 20:26:31.691114    4270 fix.go:216] guest clock: 1728012391.795698782
	I1003 20:26:31.691120    4270 fix.go:229] Guest: 2024-10-03 20:26:31.795698782 -0700 PDT Remote: 2024-10-03 20:26:31.634197 -0700 PDT m=+15.578796913 (delta=161.501782ms)
	I1003 20:26:31.691144    4270 fix.go:200] guest clock delta is within tolerance: 161.501782ms
	I1003 20:26:31.691148    4270 start.go:83] releasing machines lock for "ha-214000-m02", held for 15.538470572s
	I1003 20:26:31.691164    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:31.691307    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:26:31.691414    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:31.691697    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:31.691819    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:26:31.691915    4270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:26:31.691950    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:31.691967    4270 ssh_runner.go:195] Run: systemctl --version
	I1003 20:26:31.692007    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:26:31.692041    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:31.692117    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:31.692136    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:26:31.692227    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:31.692239    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:26:31.692321    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:26:31.692331    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:26:31.692403    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:26:31.722616    4270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:26:31.770640    4270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:26:31.770856    4270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:26:31.784455    4270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:26:31.784468    4270 start.go:495] detecting cgroup driver to use...
	I1003 20:26:31.784584    4270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:26:31.799412    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:26:31.807498    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:26:31.815748    4270 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:26:31.815817    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:26:31.824099    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:26:31.832286    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:26:31.840305    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:26:31.848917    4270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:26:31.857359    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:26:31.865510    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:26:31.873481    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:26:31.881641    4270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:26:31.888965    4270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:26:31.889015    4270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:26:31.898246    4270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:26:31.906827    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:26:32.002621    4270 ssh_runner.go:195] Run: sudo systemctl restart containerd
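	The sed run above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.10, force the runc v2 runtime, point conf_dir at /etc/cni/net.d, and set SystemdCgroup = false so containerd manages cgroups through cgroupfs, matching the cgroup driver minikube selects for the kubelet. The result can be spot-checked in the guest (a sketch, not run by the test):

	out/minikube-darwin-amd64 -p ha-214000 ssh -n m02 "grep -n SystemdCgroup /etc/containerd/config.toml"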
	I1003 20:26:32.021402    4270 start.go:495] detecting cgroup driver to use...
	I1003 20:26:32.021505    4270 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:26:32.036526    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:26:32.047548    4270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:26:32.060233    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:26:32.070902    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:26:32.081290    4270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:26:32.104056    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:26:32.114459    4270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:26:32.130649    4270 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:26:32.133790    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:26:32.141893    4270 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:26:32.156581    4270 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:26:32.260622    4270 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:26:32.356193    4270 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:26:32.356275    4270 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:26:32.370080    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:26:32.473479    4270 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:26:34.731450    4270 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.257926757s)
	I1003 20:26:34.731528    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:26:34.741867    4270 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:26:34.754869    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:26:34.765445    4270 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:26:34.857373    4270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:26:34.958922    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:26:35.075092    4270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:26:35.088788    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:26:35.099967    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:26:35.198267    4270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:26:35.264191    4270 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:26:35.264293    4270 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:26:35.268593    4270 start.go:563] Will wait 60s for crictl version
	I1003 20:26:35.268665    4270 ssh_runner.go:195] Run: which crictl
	I1003 20:26:35.271866    4270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:26:35.303430    4270 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
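	With containerd and cri-o stopped a few lines earlier, CRI requests reach Docker through cri-dockerd, which is why /etc/crictl.yaml was rewritten to point at unix:///var/run/cri-dockerd.sock before the version probe; the probe then reports Docker 27.3.1 behind CRI v1. The same check can be issued explicitly (a sketch mirroring the test's probe):

	out/minikube-darwin-amd64 -p ha-214000 ssh -n m02 "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version"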
	I1003 20:26:35.303513    4270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:26:35.319641    4270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:26:35.361206    4270 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:26:35.361254    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:26:35.361691    4270 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:26:35.366232    4270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:26:35.376610    4270 mustload.go:65] Loading cluster: ha-214000
	I1003 20:26:35.376777    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:26:35.377015    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:35.377037    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:35.388167    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51361
	I1003 20:26:35.388477    4270 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:35.388816    4270 main.go:141] libmachine: Using API Version  1
	I1003 20:26:35.388829    4270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:35.389022    4270 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:35.389130    4270 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:26:35.389218    4270 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:26:35.389288    4270 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:26:35.390336    4270 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:26:35.390606    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:26:35.390631    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:26:35.401450    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51363
	I1003 20:26:35.401756    4270 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:26:35.402115    4270 main.go:141] libmachine: Using API Version  1
	I1003 20:26:35.402132    4270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:26:35.402374    4270 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:26:35.402486    4270 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:26:35.402615    4270 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.6
	I1003 20:26:35.402624    4270 certs.go:194] generating shared ca certs ...
	I1003 20:26:35.402633    4270 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:26:35.402822    4270 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:26:35.402919    4270 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:26:35.402930    4270 certs.go:256] generating profile certs ...
	I1003 20:26:35.403037    4270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:26:35.403057    4270 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920
	I1003 20:26:35.403073    4270 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1003 20:26:35.527462    4270 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920 ...
	I1003 20:26:35.527478    4270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920: {Name:mkef92f8fe64da5a52183f4691a9cd9072b32341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:26:35.527967    4270 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920 ...
	I1003 20:26:35.527978    4270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920: {Name:mk4c4c7f0464367a22cee2cfb82e667e918da285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:26:35.528255    4270 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:26:35.528475    4270 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:26:35.528757    4270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
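
crypto.go:68 above issues the apiserver serving certificate with every address a client might dial: the in-cluster Service IPs 10.96.0.1 and 10.0.0.1, loopback, both control-plane node IPs 192.169.0.5/.6, and the kube-vip VIP 192.169.0.254. A compact sketch of signing such a cert with Go's crypto/x509 is below; the throwaway in-memory CA, RSA-2048 keys, and validity windows are placeholder assumptions (the real run reuses the existing minikubeCA key pair, as the certs.go:235 lines say), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (assumption: minikube loads
	// ca.key/ca.crt from .minikube instead of generating them here).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// apiserver serving cert carrying the IP SANs listed in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}

Including the VIP 192.169.0.254 in the SAN list is what lets clients reach any control-plane replica through kube-vip without TLS verification errors.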
	I1003 20:26:35.528766    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:26:35.528789    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:26:35.528807    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:26:35.528825    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:26:35.528846    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:26:35.528865    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:26:35.528884    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:26:35.528903    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:26:35.529003    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:26:35.529066    4270 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:26:35.529075    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:26:35.529104    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:26:35.529138    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:26:35.529170    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:26:35.529237    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:26:35.529268    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:26:35.529288    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:26:35.529312    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:26:35.529344    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:26:35.529492    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:26:35.529581    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:26:35.529678    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:26:35.529770    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:26:35.556752    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1003 20:26:35.562386    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1003 20:26:35.578464    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1003 20:26:35.581589    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1003 20:26:35.590755    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1003 20:26:35.594139    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1003 20:26:35.602981    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1003 20:26:35.605966    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1003 20:26:35.614900    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1003 20:26:35.618117    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1003 20:26:35.626220    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1003 20:26:35.629435    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1003 20:26:35.639509    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:26:35.658820    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:26:35.677757    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:26:35.696831    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:26:35.715849    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1003 20:26:35.735011    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:26:35.753741    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:26:35.772611    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:26:35.792089    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:26:35.812473    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:26:35.833117    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:26:35.853541    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1003 20:26:35.867510    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1003 20:26:35.881097    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1003 20:26:35.894618    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1003 20:26:35.908200    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1003 20:26:35.921614    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1003 20:26:35.934945    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
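
The "scp memory -->" entries show ssh_runner streaming assets it already holds in memory (sa.pub/sa.key, the front-proxy and etcd CA material staged just above) straight onto the remote node, with no local temp file. A rough equivalent with golang.org/x/crypto/ssh follows, assuming password auth and a writable destination purely for brevity; minikube's real transfer rides its own ssh_runner with the key auth shown in the sshutil.go:53 line.

package main

import (
	"bytes"
	"log"

	"golang.org/x/crypto/ssh"
)

// writeRemote pipes an in-memory byte slice into "cat > dest" on the remote
// host, a stand-in for the scp-style copy in the log above.
func writeRemote(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("cat > " + dest)
}

func main() {
	// Placeholder connection details; the log's real target is 192.169.0.5:22
	// with an id_rsa key, not a password.
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("...")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := writeRemote(client, []byte("example"), "/tmp/sa.pub"); err != nil {
		log.Fatal(err)
	}
}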
	I1003 20:26:35.948907    4270 ssh_runner.go:195] Run: openssl version
	I1003 20:26:35.953247    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:26:35.962348    4270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:26:35.965726    4270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:26:35.965789    4270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:26:35.970003    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 20:26:35.979554    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:26:35.988693    4270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:26:35.992058    4270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:26:35.992110    4270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:26:35.996436    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:26:36.005810    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:26:36.015391    4270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:26:36.018784    4270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:26:36.018843    4270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:26:36.023085    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:26:36.032401    4270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:26:36.035490    4270 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:26:36.035539    4270 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I1003 20:26:36.035623    4270 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
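
kubeadm.go:946 prints the systemd drop-in it is about to install: ExecStart is first cleared, then re-set so the per-node flags (--hostname-override=ha-214000-m02, --node-ip=192.169.0.6) land on this kubelet. A sketch of rendering that drop-in with text/template, on the assumption that it is templated from the node fields shown in the kubeadm.go:934 line; the struct and template text here are illustrative, not minikube's source.

package main

import (
	"os"
	"text/template"
)

// Hypothetical subset of the node fields visible in the kubeadm.go:934 line.
type node struct {
	Name, IP, KubernetesVersion string
}

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, node{Name: "ha-214000-m02", IP: "192.169.0.6", KubernetesVersion: "v1.31.1"})
}

The empty ExecStart= line is the systemd idiom for replacing (rather than appending to) the ExecStart defined in the base kubelet.service unit.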
	I1003 20:26:36.035646    4270 kube-vip.go:115] generating kube-vip config ...
	I1003 20:26:36.035697    4270 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:26:36.049338    4270 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:26:36.049427    4270 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
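
kube-vip.go:137 dumps the static-pod manifest headed for /etc/kubernetes/manifests/kube-vip.yaml: one manager container with NET_ADMIN/NET_RAW that leader-elects on the plndr-cp-lock lease, ARPs the VIP 192.169.0.254 on eth0, and, per the kube-vip.go:167 line, has control-plane load-balancing (lb_enable/lb_port=8443) switched on; the modprobe of the ip_vs modules just before it is what that load balancer relies on. A small sketch that parses such a manifest and lists the env pairs, using gopkg.in/yaml.v3 (an assumption; minikube templates this YAML rather than parsing it):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Just enough structure to reach spec.containers[].env in the manifest above.
type pod struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

// Trimmed excerpt of the kube-vip manifest logged above.
const manifest = `spec:
  containers:
  - name: kube-vip
    env:
    - name: address
      value: 192.169.0.254
    - name: lb_port
      value: "8443"
`

func main() {
	var p pod
	if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
		log.Fatal(err)
	}
	for _, e := range p.Spec.Containers[0].Env {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}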
	I1003 20:26:36.049500    4270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:26:36.057830    4270 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1003 20:26:36.057889    4270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1003 20:26:36.066184    4270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1003 20:26:36.066194    4270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1003 20:26:36.066205    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1003 20:26:36.066200    4270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1003 20:26:36.066206    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1003 20:26:36.066268    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:26:36.066330    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1003 20:26:36.066330    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1003 20:26:36.077835    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1003 20:26:36.077858    4270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1003 20:26:36.077878    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1003 20:26:36.078372    4270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1003 20:26:36.078403    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1003 20:26:36.078417    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1003 20:26:36.107533    4270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1003 20:26:36.107569    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
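
binary.go:74 bypasses the local cache and pulls each binary from dl.k8s.io, where the ?checksum=file:<url>.sha256 suffix tells the downloader to verify the payload against the published SHA-256 file. A stripped-down version of that verification (net/http plus crypto/sha256; the kubectl URL is from the log, everything else is illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	want := strings.Fields(string(sum))[0] // .sha256 files may carry a trailing filename
	h := sha256.Sum256(bin)
	if got := hex.EncodeToString(h[:]); got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("kubectl checksum OK,", len(bin), "bytes")
}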
	I1003 20:26:36.701744    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1003 20:26:36.709850    4270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1003 20:26:36.723416    4270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:26:36.736902    4270 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1003 20:26:36.750478    4270 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:26:36.753607    4270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:26:36.763824    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:26:36.875416    4270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:26:36.891236    4270 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:26:36.891256    4270 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:26:36.891405    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:26:36.912351    4270 out.go:177] * Verifying Kubernetes components...
	I1003 20:26:36.932843    4270 out.go:177] * Enabled addons: 
	I1003 20:26:36.954024    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:26:36.975027    4270 addons.go:510] duration metric: took 83.78169ms for enable addons: enabled=[]
	I1003 20:26:37.052530    4270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:26:37.728591    4270 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:26:37.728804    4270 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd6fef60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1003 20:26:37.728865    4270 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1003 20:26:37.729158    4270 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:26:37.729320    4270 node_ready.go:35] waiting up to 6m0s for node "ha-214000-m02" to be "Ready" ...
	I1003 20:26:37.729392    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:37.729398    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:37.729405    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:37.729409    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:37.735188    4270 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
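
node_ready.go:35 now settles into the loop that fills the remainder of this log: roughly every 500ms it GETs /api/v1/nodes/ha-214000-m02, and until kubeadm join actually registers the node the apiserver answers 404, surfaced periodically as the node_ready.go:53 "not found" lines. The same wait expressed with client-go, assuming the kubeconfig path from the loader.go:395 line and a recent apimachinery wait package (a sketch, not minikube's code):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19546-1440/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 500ms for up to 6m, tolerating 404 while the node is still joining.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-214000-m02", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not registered yet, keep waiting
			}
			if err != nil {
				return false, err
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node Ready")
}

Note the kubeadm.go:483 line above: the client deliberately overrides the stale VIP host with the first control plane's 192.169.0.5:8443, which is why every GET in the loop below targets that address.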
	I1003 20:26:38.229501    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:38.229512    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:38.229518    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:38.229521    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:38.230915    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:38.729810    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:38.729826    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:38.729834    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:38.729838    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:38.731495    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:39.230689    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:39.230709    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:39.230720    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:39.230726    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:39.233204    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:39.730574    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:39.730596    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:39.730608    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:39.730616    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:39.732918    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:39.733123    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:40.230826    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:40.230852    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:40.230863    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:40.230871    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:40.233059    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:40.729994    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:40.730008    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:40.730014    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:40.730018    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:40.731770    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:41.229790    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:41.229806    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:41.229814    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:41.229820    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:41.231953    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:41.730731    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:41.730757    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:41.730768    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:41.730775    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:41.733260    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:41.733407    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:42.230381    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:42.230395    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:42.230414    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:42.230418    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:42.231874    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:42.730977    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:42.731004    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:42.731018    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:42.731027    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:42.733610    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:43.229884    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:43.229899    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:43.229907    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:43.229914    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:43.231455    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:43.729598    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:43.729610    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:43.729616    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:43.729620    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:43.731133    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:44.230143    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:44.230164    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:44.230175    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:44.230181    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:44.232520    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:44.232585    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:44.729595    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:44.729608    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:44.729614    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:44.729617    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:44.731057    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:45.229799    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:45.229813    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:45.229820    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:45.229823    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:45.231452    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:45.730563    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:45.730586    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:45.730599    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:45.730605    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:45.733174    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:46.230900    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:46.230920    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:46.230928    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:46.230932    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:46.232859    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:46.232911    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:46.729981    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:46.729997    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:46.730009    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:46.730014    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:46.731833    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:47.230142    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:47.230243    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:47.230256    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:47.230265    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:47.232737    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:47.729897    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:47.729917    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:47.729929    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:47.729937    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:47.732658    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:48.230797    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:48.230813    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:48.230819    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:48.230823    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:48.232357    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:48.731106    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:48.731136    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:48.731228    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:48.731237    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:48.734167    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:48.734230    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:49.231020    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:49.231041    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:49.231053    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:49.231060    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:49.233486    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:49.730466    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:49.730481    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:49.730487    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:49.730491    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:49.732043    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:50.229891    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:50.229908    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:50.229916    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:50.229920    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:50.231601    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:50.731736    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:50.731753    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:50.731762    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:50.731768    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:50.733485    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:51.231102    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:51.231125    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:51.231137    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:51.231144    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:51.233710    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:51.233778    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:51.730797    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:51.730892    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:51.730907    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:51.730915    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:51.733363    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:52.229704    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:52.229724    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:52.229735    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:52.229741    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:52.231527    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:52.729679    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:52.729692    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:52.729699    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:52.729702    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:52.731098    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:53.229721    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:53.229741    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:53.229753    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:53.229758    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:53.232514    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:53.729864    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:53.729891    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:53.729907    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:53.729918    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:53.732775    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:53.732840    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:54.229652    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:54.229663    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:54.229669    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:54.229673    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:54.231091    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:54.730145    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:54.730181    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:54.730194    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:54.730200    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:54.732644    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:55.230203    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:55.230214    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:55.230220    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:55.230223    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:55.231621    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:55.729957    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:55.729970    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:55.729977    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:55.729980    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:55.731485    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:56.230391    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:56.230420    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:56.230431    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:56.230440    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:56.232906    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:56.232988    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:56.729669    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:56.729684    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:56.729692    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:56.729703    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:56.731657    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:57.230520    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:57.230532    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:57.230538    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:57.230542    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:57.232069    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:57.730357    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:57.730377    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:57.730388    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:57.730394    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:57.732591    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:58.230113    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:58.230134    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:58.230146    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:58.230154    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:58.232354    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:58.730635    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:58.730648    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:58.730654    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:58.730658    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:58.732142    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:26:58.732200    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:26:59.229775    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:59.229795    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:59.229806    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:59.229814    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:59.232176    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:26:59.730545    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:26:59.730556    4270 round_trippers.go:469] Request Headers:
	I1003 20:26:59.730562    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:26:59.730565    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:26:59.732195    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:00.229703    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:00.229715    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:00.229721    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:00.229725    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:00.231144    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:00.729911    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:00.729928    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:00.729935    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:00.729939    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:00.731284    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:01.229662    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:01.229675    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:01.229681    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:01.229685    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:01.231067    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:01.231119    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:01.730405    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:01.730444    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:01.730452    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:01.730456    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:01.731842    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:02.230644    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:02.230665    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:02.230677    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:02.230684    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:02.232944    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:02.731756    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:02.731778    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:02.731789    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:02.731796    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:02.734359    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:03.230151    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:03.230164    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:03.230171    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:03.230174    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:03.231713    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:03.231785    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:03.729690    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:03.729701    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:03.729708    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:03.729711    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:03.731073    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:04.230224    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:04.230302    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:04.230312    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:04.230316    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:04.231980    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:04.730178    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:04.730191    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:04.730198    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:04.730200    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:04.731639    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:05.229681    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:05.229696    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:05.229702    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:05.229705    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:05.233821    4270 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I1003 20:27:05.233881    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:05.729989    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:05.730010    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:05.730020    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:05.730040    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:05.732563    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:06.230186    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:06.230198    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:06.230204    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:06.230207    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:06.231674    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:06.730933    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:06.730953    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:06.730965    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:06.730973    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:06.733274    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:07.231334    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:07.231355    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:07.231367    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:07.231373    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:07.234197    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:07.234350    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:07.730806    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:07.730819    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:07.730825    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:07.730829    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:07.732251    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:08.230982    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:08.231004    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:08.231016    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:08.231024    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:08.233611    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:08.729778    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:08.729791    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:08.729797    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:08.729801    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:08.731423    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:09.231035    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:09.231051    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:09.231057    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:09.231060    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:09.232644    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:09.729860    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:09.729880    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:09.729892    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:09.729898    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:09.732460    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:09.732526    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:10.229997    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:10.230017    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:10.230028    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:10.230037    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:10.232949    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:10.729791    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:10.729806    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:10.729813    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:10.729816    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:10.731329    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:11.230366    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:11.230380    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:11.230386    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:11.230390    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:11.231641    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:11.730767    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:11.730779    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:11.730784    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:11.730788    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:11.732440    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:12.230076    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:12.230093    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:12.230101    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:12.230104    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:12.231926    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:12.231994    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:12.729816    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:12.729830    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:12.729838    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:12.729842    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:12.731964    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:13.230596    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:13.230617    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:13.230630    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:13.230635    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:13.233311    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:13.730210    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:13.730273    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:13.730280    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:13.730285    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:13.731780    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:14.231160    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:14.231182    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:14.231242    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:14.231249    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:14.233160    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:14.233233    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:14.731199    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:14.731220    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:14.731232    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:14.731239    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:14.733896    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:15.230714    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:15.230733    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:15.230782    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:15.230785    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:15.232184    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:15.730178    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:15.730201    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:15.730216    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:15.730224    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:15.733014    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:16.229931    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:16.229953    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:16.229965    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:16.229971    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:16.232553    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:16.730921    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:16.730932    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:16.730939    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:16.730941    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:16.732632    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:16.732703    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:17.230513    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:17.230535    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:17.230547    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:17.230553    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:17.233138    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:17.730170    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:17.730195    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:17.730325    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:17.730340    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:17.732193    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:18.229835    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:18.229847    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:18.229854    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:18.229857    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:18.231266    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:18.730255    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:18.730366    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:18.730383    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:18.730390    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:18.732799    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:18.732877    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:19.231087    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:19.231114    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:19.231126    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:19.231134    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:19.233790    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:19.730038    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:19.730051    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:19.730057    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:19.730060    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:19.731563    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:20.230237    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:20.230345    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:20.230356    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:20.230361    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:20.232283    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:20.730974    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:20.730995    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:20.731006    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:20.731012    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:20.733607    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:20.733689    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:21.230170    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:21.230183    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:21.230189    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:21.230193    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:21.231719    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:21.731323    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:21.731343    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:21.731353    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:21.731359    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:21.733853    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:22.229962    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:22.229983    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:22.229994    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:22.230000    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:22.231965    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:22.731091    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:22.731103    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:22.731110    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:22.731113    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:22.732655    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:23.230475    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:23.230497    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:23.230508    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:23.230514    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:23.233272    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:23.233342    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:23.731992    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:23.732016    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:23.732028    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:23.732035    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:23.734826    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:24.231187    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:24.231204    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:24.231211    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:24.231215    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:24.232815    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:24.730095    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:24.730115    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:24.730127    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:24.730134    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:24.732229    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:25.230834    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:25.230859    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:25.230894    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:25.230922    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:25.233571    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:25.233644    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:25.730075    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:25.730088    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:25.730094    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:25.730097    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:25.731721    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:26.230661    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:26.230683    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:26.230695    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:26.230700    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:26.233266    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:26.731180    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:26.731196    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:26.731205    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:26.731209    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:26.733186    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:27.229994    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:27.230027    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:27.230039    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:27.230045    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:27.231739    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:27.731285    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:27.731307    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:27.731319    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:27.731326    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:27.733989    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:27.734059    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:28.230970    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:28.230990    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:28.231002    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:28.231008    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:28.233478    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:28.730033    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:28.730046    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:28.730052    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:28.730055    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:28.731388    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:29.231620    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:29.231635    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:29.231641    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:29.231645    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:29.233483    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:29.731128    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:29.731232    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:29.731247    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:29.731255    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:29.733835    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:30.230929    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:30.230943    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:30.230949    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:30.230952    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:30.232466    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:30.232553    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:30.730605    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:30.730626    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:30.730638    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:30.730645    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:30.732830    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:31.230154    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:31.230175    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:31.230185    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:31.230191    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:31.232757    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:31.729899    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:31.729912    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:31.729918    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:31.729923    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:31.731505    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:32.231816    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:32.231839    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:32.231851    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:32.231857    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:32.234335    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:32.234403    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:32.730180    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:32.730195    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:32.730203    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:32.730207    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:32.732186    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:33.230979    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:33.230994    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:33.231000    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:33.231004    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:33.232549    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:33.730529    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:33.730570    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:33.730586    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:33.730598    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:33.732899    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:34.230019    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:34.230034    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:34.230043    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:34.230046    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:34.231696    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:34.731668    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:34.731681    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:34.731687    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:34.731689    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:34.733178    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:34.733297    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:35.230748    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:35.230771    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:35.230829    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:35.230840    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:35.233543    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:35.730516    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:35.730543    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:35.730554    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:35.730562    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:35.733143    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:36.231791    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:36.231803    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:36.231809    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:36.231812    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:36.233357    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:36.732096    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:36.732118    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:36.732129    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:36.732136    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:36.734858    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:36.734937    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:37.230463    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:37.230480    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:37.230488    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:37.230492    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:37.232242    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:37.729996    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:37.730008    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:37.730015    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:37.730019    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:37.731749    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:38.230038    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:38.230053    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:38.230062    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:38.230066    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:38.231617    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:38.730532    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:38.730553    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:38.730565    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:38.730572    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:38.732835    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:39.230748    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:39.230764    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:39.230814    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:39.230817    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:39.232203    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:39.232278    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:39.731075    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:39.731097    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:39.731107    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:39.731113    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:39.733485    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:40.230853    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:40.230875    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:40.230886    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:40.230897    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:40.235609    4270 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I1003 20:27:40.731782    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:40.731795    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:40.731801    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:40.731805    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:40.733185    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:41.232051    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:41.232069    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:41.232077    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:41.232081    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:41.233897    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:41.233961    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:41.730941    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:41.730966    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:41.730977    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:41.730987    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:41.733671    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:42.230536    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:42.230548    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:42.230554    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:42.230557    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:42.232064    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:42.732089    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:42.732114    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:42.732131    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:42.732213    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:42.734736    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:43.230179    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:43.230194    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:43.230202    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:43.230208    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:43.231874    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:43.730709    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:43.730729    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:43.730742    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:43.730757    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:43.733267    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:43.733362    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:44.230231    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:44.230246    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:44.230254    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:44.230261    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:44.232035    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:44.731015    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:44.731036    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:44.731047    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:44.731055    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:44.733620    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:45.230815    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:45.230830    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:45.230836    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:45.230840    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:45.232344    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:45.731514    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:45.731540    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:45.731551    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:45.731558    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:45.733883    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:45.733950    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:46.230180    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:46.230200    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:46.230212    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:46.230219    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:46.232969    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:46.731477    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:46.731491    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:46.731497    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:46.731500    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:46.733073    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:47.231656    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:47.231689    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:47.231806    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:47.231815    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:47.234263    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:47.731185    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:47.731206    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:47.731217    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:47.731223    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:47.733661    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:48.230092    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:48.230107    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:48.230116    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:48.230122    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:48.232237    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:48.232335    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:48.730260    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:48.730281    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:48.730292    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:48.730298    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:48.732857    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:49.231165    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:49.231187    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:49.231199    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:49.231205    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:49.233507    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:49.731529    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:49.731546    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:49.731555    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:49.731561    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:49.733567    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:50.230462    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:50.230483    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:50.230496    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:50.230503    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:50.232422    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:50.232536    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:50.731029    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:50.731051    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:50.731063    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:50.731070    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:50.733372    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:51.230364    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:51.230383    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:51.230394    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:51.230400    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:51.232792    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:51.730647    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:51.730665    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:51.730747    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:51.730754    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:51.732547    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:52.231108    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:52.231119    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:52.231126    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:52.231129    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:52.232495    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:52.731327    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:52.731342    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:52.731352    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:52.731355    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:52.732815    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:52.732928    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:53.230303    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:53.230323    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:53.230334    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:53.230340    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:53.232756    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:53.730401    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:53.730424    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:53.730436    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:53.730441    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:53.732572    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:54.231020    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:54.231043    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:54.231090    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:54.231096    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:54.232872    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:54.731184    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:54.731201    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:54.731209    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:54.731215    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:54.733367    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:54.733439    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:55.232059    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:55.232084    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:55.232095    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:55.232102    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:55.235018    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:55.731959    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:55.732065    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:55.732080    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:55.732088    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:55.734325    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:56.231038    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:56.231063    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:56.231074    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:56.231080    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:56.233764    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:56.730629    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:56.730651    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:56.730662    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:56.730669    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:56.733240    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:57.232128    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:57.232143    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:57.232151    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:57.232155    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:57.234004    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:57.234059    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:27:57.731405    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:57.731428    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:57.731444    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:57.731451    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:57.734025    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:58.231160    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:58.231181    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:58.231193    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:58.231202    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:58.233951    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:58.731852    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:58.731868    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:58.731876    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:58.731881    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:58.733914    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:59.230454    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:59.230476    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:59.230488    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:59.230493    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:59.232700    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:27:59.730209    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:27:59.730220    4270 round_trippers.go:469] Request Headers:
	I1003 20:27:59.730235    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:27:59.730239    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:27:59.731610    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:27:59.731698    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:28:00.231629    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:00.231646    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:00.231655    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:00.231659    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:00.233535    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:00.731047    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:00.731062    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:00.731070    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:00.731075    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:00.732643    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:01.230312    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:01.230334    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:01.230346    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:01.230354    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:01.232710    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:01.731933    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:01.731951    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:01.731963    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:01.731970    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:01.734188    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:01.734312    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:28:02.230163    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:02.230178    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:02.230186    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:02.230191    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:02.231819    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:02.731822    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:02.731846    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:02.731857    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:02.731876    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:02.736638    4270 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I1003 20:28:03.231021    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:03.231037    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:03.231046    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:03.231051    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:03.233090    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:03.731734    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:03.731759    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:03.731770    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:03.731777    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:03.734500    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:03.734579    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:28:04.230818    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:04.230844    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:04.230856    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:04.230861    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:04.233784    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:04.731965    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:04.731985    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:04.731993    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:04.731998    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:04.734247    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:05.230461    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:05.230482    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:05.230544    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:05.230552    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:05.233463    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:05.731550    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:05.731573    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:05.731585    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:05.731594    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:05.734177    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:06.230843    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:06.230862    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:06.230874    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:06.230880    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:06.233469    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:06.233598    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:28:06.731117    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:06.731143    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:06.731155    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:06.731164    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:06.733849    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	... (polling continues unchanged from 20:28:07 through 20:28:51: GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02 every ~500ms, each attempt answered 404 Not Found in 1-4 milliseconds, with node_ready.go:53 periodically reporting: error getting node "ha-214000-m02": nodes "ha-214000-m02" not found) ...
	I1003 20:28:52.231627    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:52.231648    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:52.231659    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:52.231666    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:52.234237    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:28:52.732421    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:52.732438    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:52.732445    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:52.732450    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:52.733983    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:53.230612    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:53.230624    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:53.230630    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:53.230633    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:53.232301    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:53.730935    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:53.730955    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:53.730963    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:53.730967    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:53.732912    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:54.230940    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:28:54.230952    4270 round_trippers.go:469] Request Headers:
	I1003 20:28:54.230957    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:28:54.230961    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:28:54.232173    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:28:54.232229    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	... [log truncated for readability: the identical polling loop continues — GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02 retried every ~500ms from 20:28:54 through 20:29:52, each request returning "404 Not Found" in 1-2 milliseconds, with node_ready.go:53 logging `error getting node "ha-214000-m02": nodes "ha-214000-m02" not found` after roughly every fifth attempt] ...
	I1003 20:29:53.231369    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:53.231382    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:53.231388    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:53.231391    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:53.232441    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:29:53.732420    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:53.732449    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:53.732460    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:53.732466    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:53.734828    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:54.231849    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:54.231865    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:54.231871    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:54.231875    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:54.233191    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:29:54.731254    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:54.731273    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:54.731284    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:54.731291    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:54.733381    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:54.733479    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:29:55.231684    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:55.231705    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:55.231716    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:55.231722    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:55.234592    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:55.732726    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:55.732739    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:55.732745    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:55.732748    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:55.734256    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:29:56.231371    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:56.231393    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:56.231405    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:56.231410    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:56.234104    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:56.731934    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:56.732036    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:56.732052    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:56.732060    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:56.734421    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:56.734546    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:29:57.231183    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:57.231201    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:57.231208    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:57.231212    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:57.232834    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:29:57.732404    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:57.732430    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:57.732443    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:57.732528    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:57.734955    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:58.231265    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:58.231290    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:58.231300    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:58.231305    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:58.233722    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:58.731216    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:58.731229    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:58.731235    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:58.731238    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:58.732627    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:29:59.231315    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:59.231337    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:59.231349    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:59.231357    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:59.233913    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:29:59.233987    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:29:59.731316    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:29:59.731335    4270 round_trippers.go:469] Request Headers:
	I1003 20:29:59.731345    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:29:59.731352    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:29:59.733764    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:00.232360    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:00.232406    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:00.232415    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:00.232419    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:00.233807    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:00.731168    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:00.731180    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:00.731186    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:00.731192    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:00.732706    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:01.231260    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:01.231276    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:01.231284    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:01.231289    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:01.233056    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:01.731351    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:01.731364    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:01.731374    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:01.731377    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:01.732849    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:01.732907    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:02.231445    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:02.231462    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:02.231492    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:02.231497    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:02.233220    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:02.731713    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:02.731725    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:02.731731    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:02.731734    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:02.733096    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:03.231532    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:03.231549    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:03.231555    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:03.231559    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:03.233082    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:03.732221    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:03.732241    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:03.732252    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:03.732260    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:03.734278    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:03.734348    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:04.231676    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:04.231697    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:04.231709    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:04.231714    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:04.233998    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:04.731277    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:04.731290    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:04.731297    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:04.731301    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:04.732636    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:05.231279    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:05.231299    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:05.231310    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:05.231315    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:05.233681    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:05.732855    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:05.732875    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:05.732886    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:05.732891    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:05.735097    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:05.735227    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:06.232341    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:06.232356    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:06.232362    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:06.232365    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:06.233922    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:06.732354    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:06.732374    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:06.732385    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:06.732391    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:06.734877    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:07.231642    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:07.231662    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:07.231675    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:07.231684    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:07.233679    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:07.731221    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:07.731238    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:07.731244    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:07.731248    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:07.732910    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:08.232549    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:08.232569    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:08.232581    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:08.232588    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:08.235147    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:08.235213    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:08.732677    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:08.732703    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:08.732715    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:08.732723    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:08.734966    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:09.231742    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:09.231755    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:09.231761    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:09.231764    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:09.233275    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:09.731634    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:09.731646    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:09.731671    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:09.731675    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:09.733484    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:10.232914    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:10.232939    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:10.232950    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:10.232956    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:10.235117    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:10.732065    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:10.732082    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:10.732088    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:10.732092    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:10.733684    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:10.733740    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:11.231497    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:11.231509    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:11.231515    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:11.231519    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:11.233054    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:11.732509    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:11.732528    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:11.732539    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:11.732546    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:11.734853    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:12.231284    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:12.231297    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:12.231303    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:12.231306    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:12.233522    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:12.732288    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:12.732390    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:12.732406    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:12.732414    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:12.734761    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:12.734831    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:13.232453    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:13.232474    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:13.232485    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:13.232493    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:13.235527    4270 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1003 20:30:13.732804    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:13.732817    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:13.732823    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:13.732827    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:13.734370    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:14.231569    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:14.231589    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:14.231601    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:14.231608    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:14.233874    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:14.731348    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:14.731361    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:14.731367    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:14.731370    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:14.732703    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:15.233237    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:15.233250    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:15.233256    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:15.233258    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:15.234979    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:15.235062    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:15.732200    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:15.732222    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:15.732233    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:15.732240    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:15.734760    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:16.232713    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:16.232735    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:16.232744    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:16.232751    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:16.235276    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:16.731528    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:16.731541    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:16.731548    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:16.731551    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:16.733221    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:17.231417    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:17.231439    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:17.231451    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:17.231458    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:17.233735    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:17.732196    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:17.732227    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:17.732240    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:17.732248    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:17.734816    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:17.734942    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:18.232574    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:18.232587    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:18.232592    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:18.232596    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:18.234081    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:18.731956    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:18.731974    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:18.731986    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:18.731992    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:18.734435    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:19.231584    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:19.231605    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:19.231616    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:19.231623    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:19.234161    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:19.731340    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:19.731356    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:19.731365    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:19.731368    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:19.732924    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:20.232309    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:20.232323    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:20.232331    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:20.232336    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:20.234320    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:20.234372    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:20.732202    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:20.732224    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:20.732235    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:20.732241    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:20.734603    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:21.231291    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:21.231305    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:21.231311    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:21.231314    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:21.232972    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:21.731665    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:21.731771    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:21.731785    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:21.731791    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:21.734075    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:22.232102    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:22.232127    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:22.232137    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:22.232145    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:22.234617    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:22.234786    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:22.733045    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:22.733060    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:22.733066    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:22.733069    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:22.734499    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:23.232708    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:23.232726    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:23.232791    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:23.232798    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:23.234508    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:23.731687    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:23.731711    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:23.731723    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:23.731730    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:23.733917    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:24.232545    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:24.232558    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:24.232563    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:24.232567    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:24.234104    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:24.732327    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:24.732353    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:24.732366    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:24.732414    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:24.734473    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:24.734552    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:25.231489    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:25.231508    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:25.231520    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:25.231526    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:25.234054    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:25.732706    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:25.732727    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:25.732737    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:25.732741    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:25.734816    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:26.232105    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:26.232120    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:26.232128    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:26.232134    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:26.234096    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:26.731549    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:26.731575    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:26.731585    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:26.731593    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:26.734160    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:27.231481    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:27.231494    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:27.231500    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:27.231502    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:27.233073    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:27.233130    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:27.731554    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:27.731573    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:27.731585    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:27.731593    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:27.733697    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:28.232056    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:28.232074    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:28.232086    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:28.232091    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:28.234260    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:28.733366    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:28.733381    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:28.733388    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:28.733391    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:28.734759    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:29.232519    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:29.232540    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:29.232552    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:29.232558    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:29.235089    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:29.235160    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:29.731604    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:29.731623    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:29.731634    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:29.731641    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:29.733732    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:30.231656    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:30.231670    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:30.231676    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:30.231679    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:30.233220    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:30.732671    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:30.732695    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:30.732707    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:30.732714    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:30.735190    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:31.231836    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:31.231858    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:31.231871    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:31.231876    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:31.234421    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:31.732128    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:31.732142    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:31.732148    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:31.732150    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:31.733530    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:31.733583    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:32.232724    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:32.232746    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:32.232758    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:32.232767    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:32.235331    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:32.731529    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:32.731573    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:32.731583    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:32.731589    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:32.733363    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:33.233398    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:33.233411    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:33.233417    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:33.233421    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:33.234928    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:33.732506    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:33.732607    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:33.732622    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:33.732629    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:33.734767    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:33.734834    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:34.232600    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:34.232622    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:34.232635    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:34.232646    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:34.235484    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:34.732074    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:34.732090    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:34.732097    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:34.732101    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:34.733578    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:35.233627    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:35.233649    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:35.233661    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:35.233667    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:35.236124    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:35.732809    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:35.732835    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:35.732846    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:35.732853    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:35.735373    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:35.735546    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
	I1003 20:30:36.231546    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:36.231562    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:36.231568    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:36.231571    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:36.233173    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:36.731923    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:36.731944    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:36.731957    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:36.731973    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:36.734508    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1003 20:30:37.231652    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:37.231667    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:37.231674    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:37.231678    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:37.233677    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:37.732340    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
	I1003 20:30:37.732353    4270 round_trippers.go:469] Request Headers:
	I1003 20:30:37.732360    4270 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:30:37.732363    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:30:37.733876    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1003 20:30:37.733941    4270 node_ready.go:38] duration metric: took 4m0.002595913s for node "ha-214000-m02" to be "Ready" ...
	I1003 20:30:37.756809    4270 out.go:201] 
	W1003 20:30:37.778376    4270 out.go:270] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W1003 20:30:37.778389    4270 out.go:270] * 
	W1003 20:30:37.780960    4270 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:30:37.802240    4270 out.go:201] 

                                                
                                                
** /stderr **
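The stderr above captures minikube's node readiness wait: node_ready polls GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02 roughly every 500ms, every request answers 404 Not Found because the m02 node never registers with the apiserver, and the loop finally aborts with "context deadline exceeded" after the configured wait. The sketch below is a minimal Go illustration of that poll-until-ready pattern; it is an assumption-laden stand-in, not minikube's actual implementation (waitNodeReady, the interval, and the error handling are all illustrative).

    // Illustrative sketch only: poll an endpoint until it returns 200 OK or
    // the deadline expires, mirroring the ~500ms cadence and the
    // "context deadline exceeded" failure seen in the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitNodeReady retries GET url every interval until it answers 200 OK
    // or ctx is done. Transport errors and non-200 responses both count as
    // "not ready yet"; only the context deadline ends the wait.
    func waitNodeReady(ctx context.Context, url string, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    		if err != nil {
    			return err
    		}
    		resp, err := http.DefaultClient.Do(req)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // node object exists; a real check would also inspect conditions
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	// hypothetical endpoint standing in for the apiserver URL from the log
    	err := waitNodeReady(ctx, "https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02", 500*time.Millisecond)
    	if err != nil {
    		fmt.Println("X Exiting due to GUEST_NODE_START:", err)
    	}
    }

Treating transport errors the same as 404s keeps the loop resilient while the apiserver comes up; only the deadline terminates the wait, which is why the run above burns the full wait budget before failing with GUEST_NODE_START.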
ha_test.go:424: I1003 20:26:16.094310    4270 out.go:345] Setting OutFile to fd 1 ...
I1003 20:26:16.094668    4270 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:26:16.094674    4270 out.go:358] Setting ErrFile to fd 2...
I1003 20:26:16.094678    4270 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:26:16.094867    4270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
I1003 20:26:16.095200    4270 mustload.go:65] Loading cluster: ha-214000
I1003 20:26:16.095542    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:26:16.095898    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:26:16.095936    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:26:16.106603    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51337
I1003 20:26:16.106967    4270 main.go:141] libmachine: () Calling .GetVersion
I1003 20:26:16.107385    4270 main.go:141] libmachine: Using API Version  1
I1003 20:26:16.107397    4270 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:26:16.107610    4270 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:26:16.107744    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
I1003 20:26:16.107879    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:26:16.107910    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
I1003 20:26:16.109020    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
W1003 20:26:16.109044    4270 host.go:58] "ha-214000-m02" host status: Stopped
I1003 20:26:16.130540    4270 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
I1003 20:26:16.151383    4270 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1003 20:26:16.151444    4270 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I1003 20:26:16.151465    4270 cache.go:56] Caching tarball of preloaded images
I1003 20:26:16.151669    4270 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1003 20:26:16.151683    4270 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
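Before starting the node, minikube checks for a locally cached preload tarball and skips the download when one exists, as logged above. A hedged sketch of that stat-then-skip decision (the path layout mirrors the log; the helper name is an illustrative assumption):

// A minimal sketch, not minikube's code: stat the cached preload
// tarball and only download when it is absent.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube"), "v1.31.1", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}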
I1003 20:26:16.151812    4270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
I1003 20:26:16.152412    4270 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1003 20:26:16.152540    4270 start.go:364] duration metric: took 78.245µs to acquireMachinesLock for "ha-214000-m02"
I1003 20:26:16.152556    4270 start.go:96] Skipping create...Using existing machine configuration
I1003 20:26:16.152568    4270 fix.go:54] fixHost starting: m02
I1003 20:26:16.152804    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:26:16.152822    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:26:16.163733    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51339
I1003 20:26:16.164093    4270 main.go:141] libmachine: () Calling .GetVersion
I1003 20:26:16.164468    4270 main.go:141] libmachine: Using API Version  1
I1003 20:26:16.164494    4270 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:26:16.164731    4270 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:26:16.164844    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:16.164952    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
I1003 20:26:16.165034    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:26:16.165104    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
I1003 20:26:16.166191    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
I1003 20:26:16.166215    4270 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
I1003 20:26:16.166231    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
W1003 20:26:16.166320    4270 fix.go:138] unexpected machine state, will restart: <nil>
I1003 20:26:16.187605    4270 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
I1003 20:26:16.208141    4270 main.go:141] libmachine: (ha-214000-m02) Calling .Start
I1003 20:26:16.208435    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:26:16.208487    4270 main.go:141] libmachine: (ha-214000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
I1003 20:26:16.210280    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 3812 missing from process table
I1003 20:26:16.210292    4270 main.go:141] libmachine: (ha-214000-m02) DBG | pid 3812 is in state "Stopped"
I1003 20:26:16.210310    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid...
I1003 20:26:16.210589    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
I1003 20:26:16.236202    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
I1003 20:26:16.236228    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
I1003 20:26:16.236381    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041c8a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I1003 20:26:16.236410    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041c8a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I1003 20:26:16.236468    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
I1003 20:26:16.236503    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
I1003 20:26:16.236514    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I1003 20:26:16.237892    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 DEBUG: hyperkit: Pid is 4274
I1003 20:26:16.238320    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
I1003 20:26:16.238347    4270 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:26:16.238441    4270 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
I1003 20:26:16.240120    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
I1003 20:26:16.240221    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
I1003 20:26:16.240234    4270 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
I1003 20:26:16.240249    4270 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
I1003 20:26:16.240262    4270 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
I1003 20:26:16.240273    4270 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
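With no guest agent, the hyperkit driver recovers the VM's IP by scanning macOS's /var/db/dhcpd_leases for the generated MAC, as the entries above show. A minimal sketch, assuming the usual ip_address=/hw_address= lease fields and that ip_address precedes hw_address within an entry; this is not the driver's actual parser:

// ipForMAC scans a dhcpd_leases file for a hw_address entry matching
// the given MAC and returns the most recently seen ip_address.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,8e:24:b7:e1:5:14 -- drop the "1," type prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
}

func main() {
	fmt.Println(ipForMAC("/var/db/dhcpd_leases", "8e:24:b7:e1:5:14"))
}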
I1003 20:26:16.240317    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
I1003 20:26:16.241056    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
I1003 20:26:16.241242    4270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
I1003 20:26:16.241677    4270 machine.go:93] provisionDockerMachine start ...
I1003 20:26:16.241687    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:16.241796    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:16.241906    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:16.242018    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:16.242125    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:16.242223    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:16.242361    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:16.242543    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:16.242552    4270 main.go:141] libmachine: About to run SSH command:
hostname
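Provisioning then shells into the guest with the per-machine key and runs commands such as `hostname`. A self-contained sketch using golang.org/x/crypto/ssh (illustrative, not minikube's ssh_runner; the host-key check is skipped only because these are throwaway test VMs):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above.
	keyPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-214000-m02/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
	}
	client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}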
I1003 20:26:16.248317    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
I1003 20:26:16.257128    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I1003 20:26:16.258139    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I1003 20:26:16.258158    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I1003 20:26:16.258186    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I1003 20:26:16.258200    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I1003 20:26:16.642174    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I1003 20:26:16.642193    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I1003 20:26:16.756948    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I1003 20:26:16.756964    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I1003 20:26:16.756982    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I1003 20:26:16.756993    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I1003 20:26:16.757857    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I1003 20:26:16.757868    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I1003 20:26:22.342445    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I1003 20:26:22.342460    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I1003 20:26:22.342468    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I1003 20:26:22.366499    4270 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:26:22 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I1003 20:26:29.405110    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I1003 20:26:29.405123    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
I1003 20:26:29.405266    4270 buildroot.go:166] provisioning hostname "ha-214000-m02"
I1003 20:26:29.405278    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
I1003 20:26:29.405375    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.405453    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:29.405538    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.405631    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.405706    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:29.405863    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:29.405996    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:29.406004    4270 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
I1003 20:26:29.471653    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02

I1003 20:26:29.471672    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.471810    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:29.471914    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.472009    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.472088    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:29.472223    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:29.472372    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:29.472383    4270 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I1003 20:26:29.535149    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I1003 20:26:29.535169    4270 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
I1003 20:26:29.535190    4270 buildroot.go:174] setting up certificates
I1003 20:26:29.535203    4270 provision.go:84] configureAuth start
I1003 20:26:29.535213    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
I1003 20:26:29.535354    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
I1003 20:26:29.535428    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.535517    4270 provision.go:143] copyHostCerts
I1003 20:26:29.535547    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
I1003 20:26:29.535630    4270 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
I1003 20:26:29.535638    4270 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
I1003 20:26:29.535781    4270 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
I1003 20:26:29.535997    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
I1003 20:26:29.536053    4270 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
I1003 20:26:29.536058    4270 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
I1003 20:26:29.536145    4270 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
I1003 20:26:29.536321    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
I1003 20:26:29.536370    4270 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
I1003 20:26:29.536375    4270 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
I1003 20:26:29.536460    4270 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
I1003 20:26:29.536624    4270 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
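The provisioner mints a server certificate whose SANs cover every name and address the node answers on (127.0.0.1, 192.169.0.6, ha-214000-m02, localhost, minikube). A minimal crypto/x509 sketch; for brevity it self-signs, whereas the real flow signs with the ca.pem/ca-key.pem pair listed above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:    []string{"ha-214000-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}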
I1003 20:26:29.693826    4270 provision.go:177] copyRemoteCerts
I1003 20:26:29.693897    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1003 20:26:29.693917    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.694071    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:29.694173    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.694271    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:29.694363    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
I1003 20:26:29.727948    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
I1003 20:26:29.728021    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1003 20:26:29.747586    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1003 20:26:29.747651    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1003 20:26:29.766954    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1003 20:26:29.767024    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1003 20:26:29.786607    4270 provision.go:87] duration metric: took 251.382216ms to configureAuth
I1003 20:26:29.786621    4270 buildroot.go:189] setting minikube options for container-runtime
I1003 20:26:29.786796    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:26:29.786809    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:29.786957    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.787053    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:29.787166    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.787250    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.787336    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:29.787462    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:29.787583    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:29.787591    4270 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1003 20:26:29.840605    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I1003 20:26:29.840617    4270 buildroot.go:70] root file system type: tmpfs
I1003 20:26:29.840708    4270 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1003 20:26:29.840724    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.840854    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:29.840946    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.841023    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.841115    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:29.841259    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:29.841399    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:29.841445    4270 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1003 20:26:29.905639    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1003 20:26:29.905660    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:29.905801    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:29.905895    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.905998    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:29.906109    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:29.906261    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:29.906395    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:29.906407    4270 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1003 20:26:31.455013    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I1003 20:26:31.455027    4270 machine.go:96] duration metric: took 15.213214447s to provisionDockerMachine
I1003 20:26:31.455041    4270 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
I1003 20:26:31.455051    4270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1003 20:26:31.455061    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:31.455256    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1003 20:26:31.455268    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:31.455362    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:31.455457    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:31.455546    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:31.455639    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
I1003 20:26:31.493036    4270 ssh_runner.go:195] Run: cat /etc/os-release
I1003 20:26:31.496275    4270 info.go:137] Remote host: Buildroot 2023.02.9
I1003 20:26:31.496291    4270 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
I1003 20:26:31.496452    4270 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
I1003 20:26:31.496697    4270 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
I1003 20:26:31.496705    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
I1003 20:26:31.497002    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1003 20:26:31.511416    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
I1003 20:26:31.545413    4270 start.go:296] duration metric: took 90.36222ms for postStartSetup
I1003 20:26:31.545436    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:31.545649    4270 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I1003 20:26:31.545663    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:31.545749    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:31.545844    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:31.545924    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:31.546030    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
I1003 20:26:31.580346    4270 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
I1003 20:26:31.580423    4270 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I1003 20:26:31.634178    4270 fix.go:56] duration metric: took 15.481476216s for fixHost
I1003 20:26:31.634211    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:31.634478    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:31.634677    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:31.634868    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:31.635053    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:31.635309    4270 main.go:141] libmachine: Using SSH client type: native
I1003 20:26:31.635552    4270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc028d00] 0xc02b9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1003 20:26:31.635566    4270 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1003 20:26:31.691103    4270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012391.795698782

I1003 20:26:31.691114    4270 fix.go:216] guest clock: 1728012391.795698782
I1003 20:26:31.691120    4270 fix.go:229] Guest: 2024-10-03 20:26:31.795698782 -0700 PDT Remote: 2024-10-03 20:26:31.634197 -0700 PDT m=+15.578796913 (delta=161.501782ms)
I1003 20:26:31.691144    4270 fix.go:200] guest clock delta is within tolerance: 161.501782ms
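After SSH comes up, minikube samples the guest clock with `date +%s.%N`, diffs it against host time, and resyncs only when the delta exceeds a tolerance, as the "within tolerance" line above shows. A minimal sketch of that comparison; the 2s tolerance is an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns
// how far the guest clock is ahead of the host clock.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1728012391, 634197000) // host-side timestamp from the log
	d, err := guestClockDelta("1728012391.795698782", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", d, d < tolerance && d > -tolerance)
}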
I1003 20:26:31.691148    4270 start.go:83] releasing machines lock for "ha-214000-m02", held for 15.538470572s
I1003 20:26:31.691164    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:31.691307    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
I1003 20:26:31.691414    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:31.691697    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:31.691819    4270 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
I1003 20:26:31.691915    4270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1003 20:26:31.691950    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:31.691967    4270 ssh_runner.go:195] Run: systemctl --version
I1003 20:26:31.692007    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
I1003 20:26:31.692041    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:31.692117    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:31.692136    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
I1003 20:26:31.692227    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:31.692239    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
I1003 20:26:31.692321    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
I1003 20:26:31.692331    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
I1003 20:26:31.692403    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
I1003 20:26:31.722616    4270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1003 20:26:31.770640    4270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1003 20:26:31.770856    4270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1003 20:26:31.784455    4270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1003 20:26:31.784468    4270 start.go:495] detecting cgroup driver to use...
I1003 20:26:31.784584    4270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1003 20:26:31.799412    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1003 20:26:31.807498    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1003 20:26:31.815748    4270 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1003 20:26:31.815817    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1003 20:26:31.824099    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1003 20:26:31.832286    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1003 20:26:31.840305    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1003 20:26:31.848917    4270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1003 20:26:31.857359    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1003 20:26:31.865510    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1003 20:26:31.873481    4270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1003 20:26:31.881641    4270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1003 20:26:31.888965    4270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1003 20:26:31.889015    4270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1003 20:26:31.898246    4270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
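The sysctl probe exits 255 because br_netfilter is not yet loaded, so the fallback loads the module and then enables IPv4 forwarding. A small exec.Command sketch of the same try-then-fallback sequence (illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Verify netfilter is available; on failure, load the module.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("netfilter check failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding regardless of the probe outcome.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println(err)
	}
}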
I1003 20:26:31.906827    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 20:26:32.002621    4270 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1003 20:26:32.021402    4270 start.go:495] detecting cgroup driver to use...
I1003 20:26:32.021505    4270 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1003 20:26:32.036526    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1003 20:26:32.047548    4270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1003 20:26:32.060233    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1003 20:26:32.070902    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1003 20:26:32.081290    4270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1003 20:26:32.104056    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1003 20:26:32.114459    4270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1003 20:26:32.130649    4270 ssh_runner.go:195] Run: which cri-dockerd
I1003 20:26:32.133790    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1003 20:26:32.141893    4270 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I1003 20:26:32.156581    4270 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1003 20:26:32.260622    4270 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1003 20:26:32.356193    4270 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I1003 20:26:32.356275    4270 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1003 20:26:32.370080    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 20:26:32.473479    4270 ssh_runner.go:195] Run: sudo systemctl restart docker
I1003 20:26:34.731450    4270 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.257926757s)
I1003 20:26:34.731528    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1003 20:26:34.741867    4270 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I1003 20:26:34.754869    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1003 20:26:34.765445    4270 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1003 20:26:34.857373    4270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1003 20:26:34.958922    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 20:26:35.075092    4270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1003 20:26:35.088788    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1003 20:26:35.099967    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 20:26:35.198267    4270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1003 20:26:35.264191    4270 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1003 20:26:35.264293    4270 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1003 20:26:35.268593    4270 start.go:563] Will wait 60s for crictl version
I1003 20:26:35.268665    4270 ssh_runner.go:195] Run: which crictl
I1003 20:26:35.271866    4270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1003 20:26:35.303430    4270 start.go:579] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  27.3.1
RuntimeApiVersion:  v1
I1003 20:26:35.303513    4270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1003 20:26:35.319641    4270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1003 20:26:35.361206    4270 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I1003 20:26:35.361254    4270 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
I1003 20:26:35.361691    4270 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
I1003 20:26:35.366232    4270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1003 20:26:35.376610    4270 mustload.go:65] Loading cluster: ha-214000
I1003 20:26:35.376777    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:26:35.377015    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:26:35.377037    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:26:35.388167    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51361
I1003 20:26:35.388477    4270 main.go:141] libmachine: () Calling .GetVersion
I1003 20:26:35.388816    4270 main.go:141] libmachine: Using API Version  1
I1003 20:26:35.388829    4270 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:26:35.389022    4270 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:26:35.389130    4270 main.go:141] libmachine: (ha-214000) Calling .GetState
I1003 20:26:35.389218    4270 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:26:35.389288    4270 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
I1003 20:26:35.390336    4270 host.go:66] Checking if "ha-214000" exists ...
I1003 20:26:35.390606    4270 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:26:35.390631    4270 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:26:35.401450    4270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51363
I1003 20:26:35.401756    4270 main.go:141] libmachine: () Calling .GetVersion
I1003 20:26:35.402115    4270 main.go:141] libmachine: Using API Version  1
I1003 20:26:35.402132    4270 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:26:35.402374    4270 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:26:35.402486    4270 main.go:141] libmachine: (ha-214000) Calling .DriverName
I1003 20:26:35.402615    4270 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.6
I1003 20:26:35.402624    4270 certs.go:194] generating shared ca certs ...
I1003 20:26:35.402633    4270 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 20:26:35.402822    4270 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
I1003 20:26:35.402919    4270 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
I1003 20:26:35.402930    4270 certs.go:256] generating profile certs ...
I1003 20:26:35.403037    4270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
I1003 20:26:35.403057    4270 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920
I1003 20:26:35.403073    4270 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
I1003 20:26:35.527462    4270 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920 ...
I1003 20:26:35.527478    4270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920: {Name:mkef92f8fe64da5a52183f4691a9cd9072b32341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 20:26:35.527967    4270 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920 ...
I1003 20:26:35.527978    4270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920: {Name:mk4c4c7f0464367a22cee2cfb82e667e918da285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 20:26:35.528255    4270 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.d9ffa920 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
I1003 20:26:35.528475    4270 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.d9ffa920 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
I1003 20:26:35.528757    4270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
I1003 20:26:35.528766    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1003 20:26:35.528789    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1003 20:26:35.528807    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1003 20:26:35.528825    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1003 20:26:35.528846    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1003 20:26:35.528865    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1003 20:26:35.528884    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1003 20:26:35.528903    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1003 20:26:35.529003    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
W1003 20:26:35.529066    4270 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
I1003 20:26:35.529075    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
I1003 20:26:35.529104    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
I1003 20:26:35.529138    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
I1003 20:26:35.529170    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
I1003 20:26:35.529237    4270 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
I1003 20:26:35.529268    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1003 20:26:35.529288    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
I1003 20:26:35.529312    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
I1003 20:26:35.529344    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
I1003 20:26:35.529492    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
I1003 20:26:35.529581    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
I1003 20:26:35.529678    4270 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
I1003 20:26:35.529770    4270 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
I1003 20:26:35.556752    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I1003 20:26:35.562386    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I1003 20:26:35.578464    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I1003 20:26:35.581589    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I1003 20:26:35.590755    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I1003 20:26:35.594139    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I1003 20:26:35.602981    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I1003 20:26:35.605966    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I1003 20:26:35.614900    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I1003 20:26:35.618117    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I1003 20:26:35.626220    4270 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I1003 20:26:35.629435    4270 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I1003 20:26:35.639509    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1003 20:26:35.658820    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1003 20:26:35.677757    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1003 20:26:35.696831    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1003 20:26:35.715849    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1003 20:26:35.735011    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1003 20:26:35.753741    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1003 20:26:35.772611    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1003 20:26:35.792089    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1003 20:26:35.812473    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
I1003 20:26:35.833117    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
I1003 20:26:35.853541    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I1003 20:26:35.867510    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I1003 20:26:35.881097    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I1003 20:26:35.894618    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I1003 20:26:35.908200    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I1003 20:26:35.921614    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I1003 20:26:35.934945    4270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
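
[Editor's note] The stat/scp pairs above all follow one pattern: run stat -c %s on the destination and copy only when the file is missing (stat exits non-zero, as in several blocks below) or its size differs from the source asset. A stdlib-only Go sketch of that decision; the needsCopy helper is hypothetical, for illustration only.

package main

import (
	"fmt"
	"os"
)

// needsCopy mirrors the existence check in the log: copy when the destination
// is missing or when its size does not match the expected source length.
func needsCopy(dst string, srcLen int64) bool {
	info, err := os.Stat(dst)
	if err != nil {
		return true // stat failed, e.g. "No such file or directory"
	}
	return info.Size() != srcLen
}

func main() {
	fmt.Println(needsCopy("/var/lib/minikube/kubeconfig", 744))
}
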
I1003 20:26:35.948907    4270 ssh_runner.go:195] Run: openssl version
I1003 20:26:35.953247    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1003 20:26:35.962348    4270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1003 20:26:35.965726    4270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
I1003 20:26:35.965789    4270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1003 20:26:35.970003    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1003 20:26:35.979554    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
I1003 20:26:35.988693    4270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
I1003 20:26:35.992058    4270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
I1003 20:26:35.992110    4270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
I1003 20:26:35.996436    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
I1003 20:26:36.005810    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
I1003 20:26:36.015391    4270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
I1003 20:26:36.018784    4270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
I1003 20:26:36.018843    4270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
I1003 20:26:36.023085    4270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
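
[Editor's note] The three openssl/ln pairs above install each CA into the OpenSSL trust directory: openssl x509 -hash -noout prints the certificate's subject hash (b5213941, 51391683, 3ec20f2e in this run), and a <hash>.0 symlink in /etc/ssl/certs is what lets OpenSSL locate the CA at verification time. A Go sketch of the same step, shelling out to openssl the way the runner does; illustrative only, not minikube's certs.go.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash runs the same openssl command as the log and creates the
// /etc/ssl/certs/<hash>.0 symlink OpenSSL uses to find the CA.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
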
I1003 20:26:36.032401    4270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1003 20:26:36.035490    4270 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1003 20:26:36.035539    4270 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
I1003 20:26:36.035623    4270 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6

[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1003 20:26:36.035646    4270 kube-vip.go:115] generating kube-vip config ...
I1003 20:26:36.035697    4270 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I1003 20:26:36.049338    4270 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I1003 20:26:36.049427    4270 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.169.0.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.3
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
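
[Editor's note] The generated manifest above is later copied to /etc/kubernetes/manifests, where the kubelet runs kube-vip as a static pod that holds the 192.169.0.254 VIP via ARP and leader election. A quick way to sanity-check that such a manifest parses as a core/v1 Pod, as a sketch assuming sigs.k8s.io/yaml and k8s.io/api are on the module path; the kube-vip.yaml filename is simply wherever the manifest above was saved.

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		fmt.Fprintln(os.Stderr, "not a valid Pod manifest:", err)
		os.Exit(1)
	}
	if len(pod.Spec.Containers) == 0 {
		fmt.Fprintln(os.Stderr, "manifest has no containers")
		os.Exit(1)
	}
	fmt.Printf("ok: pod %s/%s runs image %s\n", pod.Namespace, pod.Name, pod.Spec.Containers[0].Image)
}
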
I1003 20:26:36.049500    4270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I1003 20:26:36.057830    4270 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory

Initiating transfer...
I1003 20:26:36.057889    4270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I1003 20:26:36.066184    4270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I1003 20:26:36.066194    4270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I1003 20:26:36.066205    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
I1003 20:26:36.066200    4270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I1003 20:26:36.066206    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
I1003 20:26:36.066268    4270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1003 20:26:36.066330    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
I1003 20:26:36.066330    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
I1003 20:26:36.077835    4270 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
I1003 20:26:36.077858    4270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
I1003 20:26:36.077878    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I1003 20:26:36.078372    4270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
I1003 20:26:36.078403    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I1003 20:26:36.078417    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
I1003 20:26:36.107533    4270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
I1003 20:26:36.107569    4270 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
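
[Editor's note] The "checksum=file:<url>.sha256" URLs above mean each binary is downloaded together with its published SHA-256 digest and verified before being pushed to the node. A stdlib-only Go sketch of that fetch-and-verify pattern; illustrative, not minikube's actual downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory (fine for a sketch).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the digest is the first token of the .sha256 file
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified, sha256", want)
}
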
I1003 20:26:36.701744    4270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I1003 20:26:36.709850    4270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
I1003 20:26:36.723416    4270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1003 20:26:36.736902    4270 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
I1003 20:26:36.750478    4270 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
I1003 20:26:36.753607    4270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
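
[Editor's note] The bash one-liner above makes the control-plane.minikube.internal entry idempotent: grep -v strips any previous mapping from /etc/hosts, echo appends the current VIP, and the result is copied back over the original. The same idea in Go, as an illustrative sketch with a hypothetical pinHost helper.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any stale "<ip>\t<host>" line and appends the current mapping,
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the old entry before re-adding
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
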
I1003 20:26:36.763824    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 20:26:36.875416    4270 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1003 20:26:36.891236    4270 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1003 20:26:36.891256    4270 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1003 20:26:36.891405    4270 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:26:36.912351    4270 out.go:177] * Verifying Kubernetes components...
I1003 20:26:36.932843    4270 out.go:177] * Enabled addons: 
I1003 20:26:36.954024    4270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 20:26:36.975027    4270 addons.go:510] duration metric: took 83.78169ms for enable addons: enabled=[]
I1003 20:26:37.052530    4270 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1003 20:26:37.728591    4270 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
I1003 20:26:37.728804    4270 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd6fef60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W1003 20:26:37.728865    4270 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
I1003 20:26:37.729158    4270 cert_rotation.go:140] Starting client certificate rotation controller
I1003 20:26:37.729320    4270 node_ready.go:35] waiting up to 6m0s for node "ha-214000-m02" to be "Ready" ...
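
[Editor's note] Every GET below returns 404 because the m02 kubelet has not yet registered its Node object with the API server; the waiter treats NotFound as "keep polling" rather than as a failure, until the 6m0s budget runs out. A client-go sketch of such a readiness wait (illustrative; assumes k8s.io/client-go is available and a kubeconfig is reachable):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node exists and reports Ready=True,
// tolerating NotFound while the kubelet is still registering.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // node not registered yet; keep polling
			}
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-214000-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
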
I1003 20:26:37.729392    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:37.729398    4270 round_trippers.go:469] Request Headers:
I1003 20:26:37.729405    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:37.729409    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:37.735188    4270 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
I1003 20:26:38.229501    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:38.229512    4270 round_trippers.go:469] Request Headers:
I1003 20:26:38.229518    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:38.229521    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:38.230915    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:38.729810    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:38.729826    4270 round_trippers.go:469] Request Headers:
I1003 20:26:38.729834    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:38.729838    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:38.731495    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:39.230689    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:39.230709    4270 round_trippers.go:469] Request Headers:
I1003 20:26:39.230720    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:39.230726    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:39.233204    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:39.730574    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:39.730596    4270 round_trippers.go:469] Request Headers:
I1003 20:26:39.730608    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:39.730616    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:39.732918    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:39.733123    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:40.230826    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:40.230852    4270 round_trippers.go:469] Request Headers:
I1003 20:26:40.230863    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:40.230871    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:40.233059    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:40.729994    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:40.730008    4270 round_trippers.go:469] Request Headers:
I1003 20:26:40.730014    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:40.730018    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:40.731770    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:41.229790    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:41.229806    4270 round_trippers.go:469] Request Headers:
I1003 20:26:41.229814    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:41.229820    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:41.231953    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:41.730731    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:41.730757    4270 round_trippers.go:469] Request Headers:
I1003 20:26:41.730768    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:41.730775    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:41.733260    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:41.733407    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:42.230381    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:42.230395    4270 round_trippers.go:469] Request Headers:
I1003 20:26:42.230414    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:42.230418    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:42.231874    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:42.730977    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:42.731004    4270 round_trippers.go:469] Request Headers:
I1003 20:26:42.731018    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:42.731027    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:42.733610    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:43.229884    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:43.229899    4270 round_trippers.go:469] Request Headers:
I1003 20:26:43.229907    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:43.229914    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:43.231455    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:43.729598    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:43.729610    4270 round_trippers.go:469] Request Headers:
I1003 20:26:43.729616    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:43.729620    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:43.731133    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:44.230143    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:44.230164    4270 round_trippers.go:469] Request Headers:
I1003 20:26:44.230175    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:44.230181    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:44.232520    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:44.232585    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:44.729595    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:44.729608    4270 round_trippers.go:469] Request Headers:
I1003 20:26:44.729614    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:44.729617    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:44.731057    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:45.229799    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:45.229813    4270 round_trippers.go:469] Request Headers:
I1003 20:26:45.229820    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:45.229823    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:45.231452    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:45.730563    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:45.730586    4270 round_trippers.go:469] Request Headers:
I1003 20:26:45.730599    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:45.730605    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:45.733174    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:46.230900    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:46.230920    4270 round_trippers.go:469] Request Headers:
I1003 20:26:46.230928    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:46.230932    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:46.232859    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:46.232911    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:46.729981    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:46.729997    4270 round_trippers.go:469] Request Headers:
I1003 20:26:46.730009    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:46.730014    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:46.731833    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:47.230142    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:47.230243    4270 round_trippers.go:469] Request Headers:
I1003 20:26:47.230256    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:47.230265    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:47.232737    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:47.729897    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:47.729917    4270 round_trippers.go:469] Request Headers:
I1003 20:26:47.729929    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:47.729937    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:47.732658    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:48.230797    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:48.230813    4270 round_trippers.go:469] Request Headers:
I1003 20:26:48.230819    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:48.230823    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:48.232357    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:48.731106    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:48.731136    4270 round_trippers.go:469] Request Headers:
I1003 20:26:48.731228    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:48.731237    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:48.734167    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:48.734230    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:49.231020    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:49.231041    4270 round_trippers.go:469] Request Headers:
I1003 20:26:49.231053    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:49.231060    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:49.233486    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:49.730466    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:49.730481    4270 round_trippers.go:469] Request Headers:
I1003 20:26:49.730487    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:49.730491    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:49.732043    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:50.229891    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:50.229908    4270 round_trippers.go:469] Request Headers:
I1003 20:26:50.229916    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:50.229920    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:50.231601    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:50.731736    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:50.731753    4270 round_trippers.go:469] Request Headers:
I1003 20:26:50.731762    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:50.731768    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:50.733485    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:51.231102    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:51.231125    4270 round_trippers.go:469] Request Headers:
I1003 20:26:51.231137    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:51.231144    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:51.233710    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:51.233778    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:51.730797    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:51.730892    4270 round_trippers.go:469] Request Headers:
I1003 20:26:51.730907    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:51.730915    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:51.733363    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:52.229704    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:52.229724    4270 round_trippers.go:469] Request Headers:
I1003 20:26:52.229735    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:52.229741    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:52.231527    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:52.729679    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:52.729692    4270 round_trippers.go:469] Request Headers:
I1003 20:26:52.729699    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:52.729702    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:52.731098    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:53.229721    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:53.229741    4270 round_trippers.go:469] Request Headers:
I1003 20:26:53.229753    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:53.229758    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:53.232514    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:53.729864    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:53.729891    4270 round_trippers.go:469] Request Headers:
I1003 20:26:53.729907    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:53.729918    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:53.732775    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:53.732840    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:54.229652    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:54.229663    4270 round_trippers.go:469] Request Headers:
I1003 20:26:54.229669    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:54.229673    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:54.231091    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:54.730145    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:54.730181    4270 round_trippers.go:469] Request Headers:
I1003 20:26:54.730194    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:54.730200    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:54.732644    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:55.230203    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:55.230214    4270 round_trippers.go:469] Request Headers:
I1003 20:26:55.230220    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:55.230223    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:55.231621    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:55.729957    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:55.729970    4270 round_trippers.go:469] Request Headers:
I1003 20:26:55.729977    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:55.729980    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:55.731485    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:56.230391    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:56.230420    4270 round_trippers.go:469] Request Headers:
I1003 20:26:56.230431    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:56.230440    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:56.232906    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:56.232988    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:56.729669    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:56.729684    4270 round_trippers.go:469] Request Headers:
I1003 20:26:56.729692    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:56.729703    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:56.731657    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:57.230520    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:57.230532    4270 round_trippers.go:469] Request Headers:
I1003 20:26:57.230538    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:57.230542    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:57.232069    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:57.730357    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:57.730377    4270 round_trippers.go:469] Request Headers:
I1003 20:26:57.730388    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:57.730394    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:57.732591    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:58.230113    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:58.230134    4270 round_trippers.go:469] Request Headers:
I1003 20:26:58.230146    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:58.230154    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:58.232354    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:58.730635    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:58.730648    4270 round_trippers.go:469] Request Headers:
I1003 20:26:58.730654    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:58.730658    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:58.732142    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:26:58.732200    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:26:59.229775    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:59.229795    4270 round_trippers.go:469] Request Headers:
I1003 20:26:59.229806    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:59.229814    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:59.232176    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:26:59.730545    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:26:59.730556    4270 round_trippers.go:469] Request Headers:
I1003 20:26:59.730562    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:26:59.730565    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:26:59.732195    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:00.229703    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:00.229715    4270 round_trippers.go:469] Request Headers:
I1003 20:27:00.229721    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:00.229725    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:00.231144    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:00.729911    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:00.729928    4270 round_trippers.go:469] Request Headers:
I1003 20:27:00.729935    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:00.729939    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:00.731284    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:01.229662    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:01.229675    4270 round_trippers.go:469] Request Headers:
I1003 20:27:01.229681    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:01.229685    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:01.231067    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:01.231119    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:27:01.730405    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:01.730444    4270 round_trippers.go:469] Request Headers:
I1003 20:27:01.730452    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:01.730456    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:01.731842    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:02.230644    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:02.230665    4270 round_trippers.go:469] Request Headers:
I1003 20:27:02.230677    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:02.230684    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:02.232944    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:02.731756    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:02.731778    4270 round_trippers.go:469] Request Headers:
I1003 20:27:02.731789    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:02.731796    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:02.734359    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:03.230151    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:03.230164    4270 round_trippers.go:469] Request Headers:
I1003 20:27:03.230171    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:03.230174    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:03.231713    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:03.231785    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:27:03.729690    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:03.729701    4270 round_trippers.go:469] Request Headers:
I1003 20:27:03.729708    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:03.729711    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:03.731073    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:04.230224    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:04.230302    4270 round_trippers.go:469] Request Headers:
I1003 20:27:04.230312    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:04.230316    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:04.231980    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:04.730178    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:04.730191    4270 round_trippers.go:469] Request Headers:
I1003 20:27:04.730198    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:04.730200    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:04.731639    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:05.229681    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:05.229696    4270 round_trippers.go:469] Request Headers:
I1003 20:27:05.229702    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:05.229705    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:05.233821    4270 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I1003 20:27:05.233881    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:27:05.729989    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:05.730010    4270 round_trippers.go:469] Request Headers:
I1003 20:27:05.730020    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:05.730040    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:05.732563    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:06.230186    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:06.230198    4270 round_trippers.go:469] Request Headers:
I1003 20:27:06.230204    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:06.230207    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:06.231674    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:06.730933    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:06.730953    4270 round_trippers.go:469] Request Headers:
I1003 20:27:06.730965    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:06.730973    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:06.733274    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:07.231334    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:07.231355    4270 round_trippers.go:469] Request Headers:
I1003 20:27:07.231367    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:07.231373    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:07.234197    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:07.234350    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:27:07.730806    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:07.730819    4270 round_trippers.go:469] Request Headers:
I1003 20:27:07.730825    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:07.730829    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:07.732251    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:08.230982    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:08.231004    4270 round_trippers.go:469] Request Headers:
I1003 20:27:08.231016    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:08.231024    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:08.233611    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:08.729778    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:08.729791    4270 round_trippers.go:469] Request Headers:
I1003 20:27:08.729797    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:08.729801    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:08.731423    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:09.231035    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:09.231051    4270 round_trippers.go:469] Request Headers:
I1003 20:27:09.231057    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:09.231060    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:09.232644    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:09.729860    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:09.729880    4270 round_trippers.go:469] Request Headers:
I1003 20:27:09.729892    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:09.729898    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:09.732460    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:09.732526    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:27:10.229997    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:10.230017    4270 round_trippers.go:469] Request Headers:
I1003 20:27:10.230028    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:10.230037    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:10.232949    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:27:10.729791    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:10.729806    4270 round_trippers.go:469] Request Headers:
I1003 20:27:10.729813    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:10.729816    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:10.731329    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:11.230366    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:11.230380    4270 round_trippers.go:469] Request Headers:
I1003 20:27:11.230386    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:11.230390    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:11.231641    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:11.730767    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:11.730779    4270 round_trippers.go:469] Request Headers:
I1003 20:27:11.730784    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:11.730788    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:11.732440    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:12.230076    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:12.230093    4270 round_trippers.go:469] Request Headers:
I1003 20:27:12.230101    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:12.230104    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:12.231926    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:27:12.231994    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:27:12.729816    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:27:12.729830    4270 round_trippers.go:469] Request Headers:
I1003 20:27:12.729838    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:27:12.729842    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:27:12.731964    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	[... the same GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02 poll repeats every ~500ms from 20:27:13 through 20:28:12, each request carrying the same Accept and User-Agent headers and each returning 404 Not Found in 1-4 milliseconds; node_ready.go:53 logs `error getting node "ha-214000-m02": nodes "ha-214000-m02" not found` roughly every two seconds throughout ...]
I1003 20:28:13.231747    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:13.231844    4270 round_trippers.go:469] Request Headers:
I1003 20:28:13.231859    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:13.231866    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:13.234236    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:13.234310    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:13.731053    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:13.731083    4270 round_trippers.go:469] Request Headers:
I1003 20:28:13.731171    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:13.731179    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:13.733658    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:14.231293    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:14.231316    4270 round_trippers.go:469] Request Headers:
I1003 20:28:14.231327    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:14.231333    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:14.233708    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:14.731149    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:14.731173    4270 round_trippers.go:469] Request Headers:
I1003 20:28:14.731184    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:14.731189    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:14.733853    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:15.231400    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:15.231419    4270 round_trippers.go:469] Request Headers:
I1003 20:28:15.231430    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:15.231434    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:15.233435    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:15.731267    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:15.731293    4270 round_trippers.go:469] Request Headers:
I1003 20:28:15.731305    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:15.731310    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:15.733483    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:15.733676    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:16.230383    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:16.230409    4270 round_trippers.go:469] Request Headers:
I1003 20:28:16.230420    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:16.230425    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:16.232993    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:16.731358    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:16.731377    4270 round_trippers.go:469] Request Headers:
I1003 20:28:16.731385    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:16.731391    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:16.733206    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:17.230318    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:17.230379    4270 round_trippers.go:469] Request Headers:
I1003 20:28:17.230392    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:17.230413    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:17.232775    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:17.730734    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:17.730760    4270 round_trippers.go:469] Request Headers:
I1003 20:28:17.730845    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:17.730857    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:17.733442    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:18.232202    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:18.232215    4270 round_trippers.go:469] Request Headers:
I1003 20:28:18.232270    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:18.232275    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:18.233622    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:18.233688    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:18.730307    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:18.730326    4270 round_trippers.go:469] Request Headers:
I1003 20:28:18.730335    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:18.730339    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:18.732066    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:19.230953    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:19.230979    4270 round_trippers.go:469] Request Headers:
I1003 20:28:19.230997    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:19.231003    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:19.233381    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:19.731953    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:19.732059    4270 round_trippers.go:469] Request Headers:
I1003 20:28:19.732074    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:19.732083    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:19.734581    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:20.231302    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:20.231347    4270 round_trippers.go:469] Request Headers:
I1003 20:28:20.231354    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:20.231358    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:20.232895    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:20.731115    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:20.731137    4270 round_trippers.go:469] Request Headers:
I1003 20:28:20.731145    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:20.731150    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:20.732811    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:20.732882    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:21.230901    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:21.230921    4270 round_trippers.go:469] Request Headers:
I1003 20:28:21.230932    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:21.230938    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:21.233315    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:21.730431    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:21.730456    4270 round_trippers.go:469] Request Headers:
I1003 20:28:21.730467    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:21.730473    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:21.732810    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:22.230739    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:22.230758    4270 round_trippers.go:469] Request Headers:
I1003 20:28:22.230769    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:22.230775    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:22.233234    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:22.731115    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:22.731174    4270 round_trippers.go:469] Request Headers:
I1003 20:28:22.731183    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:22.731188    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:22.732657    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:23.230573    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:23.230648    4270 round_trippers.go:469] Request Headers:
I1003 20:28:23.230659    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:23.230664    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:23.232453    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:23.232503    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:23.731059    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:23.731085    4270 round_trippers.go:469] Request Headers:
I1003 20:28:23.731134    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:23.731140    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:23.733404    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:24.230813    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:24.230903    4270 round_trippers.go:469] Request Headers:
I1003 20:28:24.230915    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:24.230925    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:24.232700    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:24.730435    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:24.730460    4270 round_trippers.go:469] Request Headers:
I1003 20:28:24.730472    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:24.730477    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:24.732849    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:25.230822    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:25.230838    4270 round_trippers.go:469] Request Headers:
I1003 20:28:25.230844    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:25.230846    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:25.232429    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:25.731196    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:25.731214    4270 round_trippers.go:469] Request Headers:
I1003 20:28:25.731222    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:25.731228    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:25.733124    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:25.733183    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:26.230411    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:26.230424    4270 round_trippers.go:469] Request Headers:
I1003 20:28:26.230430    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:26.230434    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:26.231941    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:26.730872    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:26.730896    4270 round_trippers.go:469] Request Headers:
I1003 20:28:26.730906    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:26.730912    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:26.732909    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:27.230484    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:27.230499    4270 round_trippers.go:469] Request Headers:
I1003 20:28:27.230507    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:27.230511    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:27.231978    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:27.730430    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:27.730445    4270 round_trippers.go:469] Request Headers:
I1003 20:28:27.730454    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:27.730459    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:27.732032    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:28.231280    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:28.231386    4270 round_trippers.go:469] Request Headers:
I1003 20:28:28.231402    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:28.231410    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:28.234080    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:28.234170    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:28.731318    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:28.731333    4270 round_trippers.go:469] Request Headers:
I1003 20:28:28.731341    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:28.731345    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:28.733046    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:29.231191    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:29.231213    4270 round_trippers.go:469] Request Headers:
I1003 20:28:29.231225    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:29.231232    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:29.233649    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:29.731214    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:29.731250    4270 round_trippers.go:469] Request Headers:
I1003 20:28:29.731263    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:29.731270    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:29.733798    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:30.230924    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:30.230947    4270 round_trippers.go:469] Request Headers:
I1003 20:28:30.230959    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:30.230967    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:30.233298    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:30.731343    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:30.731355    4270 round_trippers.go:469] Request Headers:
I1003 20:28:30.731360    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:30.731364    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:30.732597    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:30.732648    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:31.230481    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:31.230493    4270 round_trippers.go:469] Request Headers:
I1003 20:28:31.230499    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:31.230502    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:31.231629    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:31.731897    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:31.731923    4270 round_trippers.go:469] Request Headers:
I1003 20:28:31.731935    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:31.731940    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:31.733833    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:32.230895    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:32.230916    4270 round_trippers.go:469] Request Headers:
I1003 20:28:32.230928    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:32.230935    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:32.233668    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:32.731082    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:32.731097    4270 round_trippers.go:469] Request Headers:
I1003 20:28:32.731104    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:32.731107    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:32.732409    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:33.230429    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:33.230470    4270 round_trippers.go:469] Request Headers:
I1003 20:28:33.230481    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:33.230485    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:33.231571    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:33.231626    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:33.731373    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:33.731394    4270 round_trippers.go:469] Request Headers:
I1003 20:28:33.731403    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:33.731409    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:33.733353    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:34.231638    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:34.231658    4270 round_trippers.go:469] Request Headers:
I1003 20:28:34.231670    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:34.231676    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:34.233780    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:34.730544    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:34.730563    4270 round_trippers.go:469] Request Headers:
I1003 20:28:34.730574    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:34.730582    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:34.732762    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:35.230632    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:35.230657    4270 round_trippers.go:469] Request Headers:
I1003 20:28:35.230669    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:35.230734    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:35.233443    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:35.233496    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:35.731070    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:35.731092    4270 round_trippers.go:469] Request Headers:
I1003 20:28:35.731102    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:35.731109    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:35.733386    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:36.230611    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:36.230626    4270 round_trippers.go:469] Request Headers:
I1003 20:28:36.230634    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:36.230639    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:36.232734    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:36.730643    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:36.730659    4270 round_trippers.go:469] Request Headers:
I1003 20:28:36.730667    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:36.730672    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:36.732126    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:37.230758    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:37.230782    4270 round_trippers.go:469] Request Headers:
I1003 20:28:37.230795    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:37.230799    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:37.233024    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:37.730561    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:37.730582    4270 round_trippers.go:469] Request Headers:
I1003 20:28:37.730593    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:37.730598    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:37.733546    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:37.733685    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:38.231401    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:38.231429    4270 round_trippers.go:469] Request Headers:
I1003 20:28:38.231440    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:38.231447    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:38.233972    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:38.731108    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:38.731129    4270 round_trippers.go:469] Request Headers:
I1003 20:28:38.731140    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:38.731145    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:38.733968    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:39.230753    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:39.230774    4270 round_trippers.go:469] Request Headers:
I1003 20:28:39.230785    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:39.230792    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:39.233153    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:39.731045    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:39.731064    4270 round_trippers.go:469] Request Headers:
I1003 20:28:39.731075    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:39.731082    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:39.733471    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:40.231911    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:40.231935    4270 round_trippers.go:469] Request Headers:
I1003 20:28:40.231963    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:40.231973    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:40.234788    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:40.234890    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:40.731267    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:40.731278    4270 round_trippers.go:469] Request Headers:
I1003 20:28:40.731285    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:40.731289    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:40.732572    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:41.230979    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:41.231000    4270 round_trippers.go:469] Request Headers:
I1003 20:28:41.231013    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:41.231022    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:41.233578    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:41.730635    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:41.730657    4270 round_trippers.go:469] Request Headers:
I1003 20:28:41.730667    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:41.730676    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:41.732983    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:42.231266    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:42.231293    4270 round_trippers.go:469] Request Headers:
I1003 20:28:42.231305    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:42.231310    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:42.233954    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:42.731139    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:42.731160    4270 round_trippers.go:469] Request Headers:
I1003 20:28:42.731172    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:42.731181    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:42.733673    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:42.733740    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:43.230731    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:43.230763    4270 round_trippers.go:469] Request Headers:
I1003 20:28:43.230844    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:43.230852    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:43.233356    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:43.731906    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:43.731925    4270 round_trippers.go:469] Request Headers:
I1003 20:28:43.731937    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:43.731957    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:43.734160    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:44.230797    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:44.230818    4270 round_trippers.go:469] Request Headers:
I1003 20:28:44.230829    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:44.230835    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:44.233339    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:44.731420    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:44.731442    4270 round_trippers.go:469] Request Headers:
I1003 20:28:44.731453    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:44.731458    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:44.733874    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:44.733945    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:45.232160    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:45.232177    4270 round_trippers.go:469] Request Headers:
I1003 20:28:45.232183    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:45.232186    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:45.233551    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:45.731520    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:45.731541    4270 round_trippers.go:469] Request Headers:
I1003 20:28:45.731551    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:45.731559    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:45.733935    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:46.232056    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:46.232077    4270 round_trippers.go:469] Request Headers:
I1003 20:28:46.232088    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:46.232094    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:46.234289    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:46.732550    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:46.732568    4270 round_trippers.go:469] Request Headers:
I1003 20:28:46.732575    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:46.732577    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:46.733998    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:46.734054    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:47.231967    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:47.231992    4270 round_trippers.go:469] Request Headers:
I1003 20:28:47.232004    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:47.232011    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:47.233944    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:47.731125    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:47.731250    4270 round_trippers.go:469] Request Headers:
I1003 20:28:47.731269    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:47.731277    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:47.734424    4270 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I1003 20:28:48.230920    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:48.230934    4270 round_trippers.go:469] Request Headers:
I1003 20:28:48.230942    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:48.230946    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:48.232451    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:48.730710    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:48.730735    4270 round_trippers.go:469] Request Headers:
I1003 20:28:48.730746    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:48.730753    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:48.733085    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:49.231726    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:49.231747    4270 round_trippers.go:469] Request Headers:
I1003 20:28:49.231758    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:49.231765    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:49.233994    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:49.234070    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:49.731457    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:49.731475    4270 round_trippers.go:469] Request Headers:
I1003 20:28:49.731484    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:49.731488    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:49.733083    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:50.231771    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:50.231888    4270 round_trippers.go:469] Request Headers:
I1003 20:28:50.231905    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:50.231912    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:50.234472    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:50.730599    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:50.730611    4270 round_trippers.go:469] Request Headers:
I1003 20:28:50.730617    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:50.730620    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:50.732174    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:51.230675    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:51.230696    4270 round_trippers.go:469] Request Headers:
I1003 20:28:51.230730    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:51.230738    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:51.232700    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:51.731472    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:51.731495    4270 round_trippers.go:469] Request Headers:
I1003 20:28:51.731508    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:51.731514    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:51.734186    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:51.734259    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:52.231627    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:52.231648    4270 round_trippers.go:469] Request Headers:
I1003 20:28:52.231659    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:52.231666    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:52.234237    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:52.732421    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:52.732438    4270 round_trippers.go:469] Request Headers:
I1003 20:28:52.732445    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:52.732450    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:52.733983    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:53.230612    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:53.230624    4270 round_trippers.go:469] Request Headers:
I1003 20:28:53.230630    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:53.230633    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:53.232301    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:53.730935    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:53.730955    4270 round_trippers.go:469] Request Headers:
I1003 20:28:53.730963    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:53.730967    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:53.732912    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:54.230940    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:54.230952    4270 round_trippers.go:469] Request Headers:
I1003 20:28:54.230957    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:54.230961    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:54.232173    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:54.232229    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:54.730904    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:54.730917    4270 round_trippers.go:469] Request Headers:
I1003 20:28:54.730923    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:54.730926    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:54.732232    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:55.231866    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:55.231891    4270 round_trippers.go:469] Request Headers:
I1003 20:28:55.231903    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:55.231911    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:55.234452    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:55.731976    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:55.732053    4270 round_trippers.go:469] Request Headers:
I1003 20:28:55.732061    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:55.732065    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:55.735887    4270 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I1003 20:28:56.232277    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:56.232306    4270 round_trippers.go:469] Request Headers:
I1003 20:28:56.232396    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:56.232403    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:56.234657    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:56.234898    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:56.731011    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:56.731037    4270 round_trippers.go:469] Request Headers:
I1003 20:28:56.731048    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:56.731054    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:56.733184    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:57.231182    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:57.231198    4270 round_trippers.go:469] Request Headers:
I1003 20:28:57.231206    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:57.231211    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:57.233236    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:57.731224    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:57.731245    4270 round_trippers.go:469] Request Headers:
I1003 20:28:57.731256    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:57.731264    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:57.733575    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:58.230777    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:58.230799    4270 round_trippers.go:469] Request Headers:
I1003 20:28:58.230811    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:58.230816    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:58.232892    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:58.732260    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:58.732281    4270 round_trippers.go:469] Request Headers:
I1003 20:28:58.732351    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:58.732356    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:58.734189    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:28:58.734262    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:28:59.230777    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:59.230798    4270 round_trippers.go:469] Request Headers:
I1003 20:28:59.230809    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:59.230818    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:59.233270    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:28:59.730769    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:28:59.730794    4270 round_trippers.go:469] Request Headers:
I1003 20:28:59.730805    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:28:59.730812    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:28:59.733269    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:00.230780    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:00.230792    4270 round_trippers.go:469] Request Headers:
I1003 20:29:00.230798    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:00.230801    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:00.232347    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:00.730983    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:00.731014    4270 round_trippers.go:469] Request Headers:
I1003 20:29:00.731051    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:00.731056    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:00.733531    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:01.231231    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:01.231251    4270 round_trippers.go:469] Request Headers:
I1003 20:29:01.231262    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:01.231271    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:01.233620    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:01.233685    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:01.730741    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:01.730754    4270 round_trippers.go:469] Request Headers:
I1003 20:29:01.730761    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:01.730764    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:01.732092    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:02.231342    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:02.231360    4270 round_trippers.go:469] Request Headers:
I1003 20:29:02.231368    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:02.231373    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:02.233308    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:02.731349    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:02.731369    4270 round_trippers.go:469] Request Headers:
I1003 20:29:02.731380    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:02.731386    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:02.733943    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:03.230925    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:03.230941    4270 round_trippers.go:469] Request Headers:
I1003 20:29:03.230949    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:03.230953    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:03.232555    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:03.731783    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:03.731803    4270 round_trippers.go:469] Request Headers:
I1003 20:29:03.731815    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:03.731820    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:03.733884    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:03.733952    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:04.231451    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:04.231471    4270 round_trippers.go:469] Request Headers:
I1003 20:29:04.231483    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:04.231490    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:04.233498    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:04.730710    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:04.730725    4270 round_trippers.go:469] Request Headers:
I1003 20:29:04.730731    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:04.730734    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:04.732337    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:05.232079    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:05.232104    4270 round_trippers.go:469] Request Headers:
I1003 20:29:05.232115    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:05.232122    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:05.234825    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:05.730852    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:05.730874    4270 round_trippers.go:469] Request Headers:
I1003 20:29:05.730886    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:05.730892    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:05.732855    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:06.231083    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:06.231099    4270 round_trippers.go:469] Request Headers:
I1003 20:29:06.231105    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:06.231112    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:06.232672    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:06.232727    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:06.731912    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:06.731934    4270 round_trippers.go:469] Request Headers:
I1003 20:29:06.731945    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:06.731951    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:06.734414    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:07.230816    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:07.230837    4270 round_trippers.go:469] Request Headers:
I1003 20:29:07.230849    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:07.230854    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:07.233117    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:07.730805    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:07.730820    4270 round_trippers.go:469] Request Headers:
I1003 20:29:07.730827    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:07.730830    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:07.732338    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:08.232752    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:08.232777    4270 round_trippers.go:469] Request Headers:
I1003 20:29:08.232788    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:08.232794    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:08.235181    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:08.235252    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:08.730928    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:08.730950    4270 round_trippers.go:469] Request Headers:
I1003 20:29:08.730961    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:08.730966    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:08.733170    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:09.231099    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:09.231115    4270 round_trippers.go:469] Request Headers:
I1003 20:29:09.231124    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:09.231128    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:09.232725    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:09.732391    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:09.732412    4270 round_trippers.go:469] Request Headers:
I1003 20:29:09.732424    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:09.732429    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:09.734645    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:10.232673    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:10.232701    4270 round_trippers.go:469] Request Headers:
I1003 20:29:10.232713    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:10.232721    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:10.235208    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:10.235378    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:10.730963    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:10.730977    4270 round_trippers.go:469] Request Headers:
I1003 20:29:10.730983    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:10.730987    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:10.732345    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:11.231530    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:11.231548    4270 round_trippers.go:469] Request Headers:
I1003 20:29:11.231559    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:11.231564    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:11.233791    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:11.731795    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:11.731815    4270 round_trippers.go:469] Request Headers:
I1003 20:29:11.731826    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:11.731833    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:11.733761    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:12.232069    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:12.232084    4270 round_trippers.go:469] Request Headers:
I1003 20:29:12.232090    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:12.232093    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:12.233585    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:12.730942    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:12.730980    4270 round_trippers.go:469] Request Headers:
I1003 20:29:12.730992    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:12.731001    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:12.733327    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:12.733412    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:13.232938    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:13.232959    4270 round_trippers.go:469] Request Headers:
I1003 20:29:13.232970    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:13.232977    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:13.235517    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:13.731079    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:13.731094    4270 round_trippers.go:469] Request Headers:
I1003 20:29:13.731146    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:13.731150    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:13.732808    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:14.232059    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:14.232084    4270 round_trippers.go:469] Request Headers:
I1003 20:29:14.232170    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:14.232180    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:14.234488    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:14.732858    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:14.732882    4270 round_trippers.go:469] Request Headers:
I1003 20:29:14.732897    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:14.732903    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:14.735359    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:14.735450    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:15.231453    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:15.231466    4270 round_trippers.go:469] Request Headers:
I1003 20:29:15.231472    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:15.231475    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:15.232825    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:15.731303    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:15.731324    4270 round_trippers.go:469] Request Headers:
I1003 20:29:15.731335    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:15.731341    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:15.733754    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:16.231076    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:16.231090    4270 round_trippers.go:469] Request Headers:
I1003 20:29:16.231101    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:16.231106    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:16.232957    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:16.731184    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:16.731243    4270 round_trippers.go:469] Request Headers:
I1003 20:29:16.731251    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:16.731255    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:16.732558    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:17.232420    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:17.232441    4270 round_trippers.go:469] Request Headers:
I1003 20:29:17.232453    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:17.232462    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:17.234767    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:17.234873    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:17.732236    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:17.732270    4270 round_trippers.go:469] Request Headers:
I1003 20:29:17.732367    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:17.732377    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:17.735229    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:18.230844    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:18.230857    4270 round_trippers.go:469] Request Headers:
I1003 20:29:18.230864    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:18.230867    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:18.232284    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:18.731757    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:18.731769    4270 round_trippers.go:469] Request Headers:
I1003 20:29:18.731775    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:18.731778    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:18.733283    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:19.231733    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:19.231752    4270 round_trippers.go:469] Request Headers:
I1003 20:29:19.231764    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:19.231771    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:19.234177    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:19.731169    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:19.731185    4270 round_trippers.go:469] Request Headers:
I1003 20:29:19.731191    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:19.731193    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:19.732566    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:19.732658    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:20.232245    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:20.232263    4270 round_trippers.go:469] Request Headers:
I1003 20:29:20.232275    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:20.232282    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:20.234604    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:20.732132    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:20.732252    4270 round_trippers.go:469] Request Headers:
I1003 20:29:20.732273    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:20.732286    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:20.734901    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:21.232569    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:21.232581    4270 round_trippers.go:469] Request Headers:
I1003 20:29:21.232587    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:21.232590    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:21.234142    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:21.731219    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:21.731240    4270 round_trippers.go:469] Request Headers:
I1003 20:29:21.731252    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:21.731259    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:21.733757    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:21.733832    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:22.231041    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:22.231066    4270 round_trippers.go:469] Request Headers:
I1003 20:29:22.231076    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:22.231111    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:22.233419    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:22.731332    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:22.731344    4270 round_trippers.go:469] Request Headers:
I1003 20:29:22.731350    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:22.731354    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:22.732904    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:23.232974    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:23.233001    4270 round_trippers.go:469] Request Headers:
I1003 20:29:23.233012    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:23.233017    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:23.235653    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:23.730978    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:23.730997    4270 round_trippers.go:469] Request Headers:
I1003 20:29:23.731008    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:23.731015    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:23.733190    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:24.232855    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:24.232868    4270 round_trippers.go:469] Request Headers:
I1003 20:29:24.232874    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:24.232878    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:24.234459    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:24.234519    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:24.730982    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:24.731009    4270 round_trippers.go:469] Request Headers:
I1003 20:29:24.731020    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:24.731026    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:24.733422    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:25.231159    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:25.231185    4270 round_trippers.go:469] Request Headers:
I1003 20:29:25.231196    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:25.231201    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:25.233737    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:25.731040    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:25.731109    4270 round_trippers.go:469] Request Headers:
I1003 20:29:25.731116    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:25.731119    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:25.732594    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:26.230940    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:26.230959    4270 round_trippers.go:469] Request Headers:
I1003 20:29:26.230969    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:26.230975    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:26.233201    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:26.732741    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:26.732761    4270 round_trippers.go:469] Request Headers:
I1003 20:29:26.732772    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:26.732778    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:26.735271    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:26.735370    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:27.231342    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:27.231356    4270 round_trippers.go:469] Request Headers:
I1003 20:29:27.231362    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:27.231366    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:27.232831    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:27.730975    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:27.730987    4270 round_trippers.go:469] Request Headers:
I1003 20:29:27.730993    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:27.730996    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:27.732308    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:28.231146    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:28.231167    4270 round_trippers.go:469] Request Headers:
I1003 20:29:28.231176    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:28.231181    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:28.233741    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:28.731640    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:28.731652    4270 round_trippers.go:469] Request Headers:
I1003 20:29:28.731658    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:28.731661    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:28.733200    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:29.231660    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:29.231679    4270 round_trippers.go:469] Request Headers:
I1003 20:29:29.231691    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:29.231700    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:29.233837    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:29.233905    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:29.732428    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:29.732450    4270 round_trippers.go:469] Request Headers:
I1003 20:29:29.732461    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:29.732468    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:29.735440    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:30.231407    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:30.231420    4270 round_trippers.go:469] Request Headers:
I1003 20:29:30.231426    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:30.231429    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:30.233000    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:30.731988    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:30.732007    4270 round_trippers.go:469] Request Headers:
I1003 20:29:30.732019    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:30.732025    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:30.734368    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:31.232235    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:31.232257    4270 round_trippers.go:469] Request Headers:
I1003 20:29:31.232270    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:31.232276    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:31.234773    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:31.234845    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:31.730993    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:31.731006    4270 round_trippers.go:469] Request Headers:
I1003 20:29:31.731015    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:31.731019    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:31.732672    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:32.231907    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:32.231927    4270 round_trippers.go:469] Request Headers:
I1003 20:29:32.231938    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:32.231947    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:32.234466    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:32.732132    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:32.732147    4270 round_trippers.go:469] Request Headers:
I1003 20:29:32.732154    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:32.732157    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:32.733755    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:33.232855    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:33.232872    4270 round_trippers.go:469] Request Headers:
I1003 20:29:33.232878    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:33.232882    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:33.234474    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:33.730987    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:33.730999    4270 round_trippers.go:469] Request Headers:
I1003 20:29:33.731005    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:33.731008    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:33.732372    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:33.732430    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:34.232187    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:34.232208    4270 round_trippers.go:469] Request Headers:
I1003 20:29:34.232221    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:34.232228    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:34.234721    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:34.732413    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:34.732426    4270 round_trippers.go:469] Request Headers:
I1003 20:29:34.732433    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:34.732435    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:34.733871    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:35.233179    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:35.233205    4270 round_trippers.go:469] Request Headers:
I1003 20:29:35.233223    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:35.233232    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:35.236144    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:35.731203    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:35.731218    4270 round_trippers.go:469] Request Headers:
I1003 20:29:35.731226    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:35.731232    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:35.733061    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:35.733139    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:36.232308    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:36.232320    4270 round_trippers.go:469] Request Headers:
I1003 20:29:36.232326    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:36.232328    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:36.233828    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:36.731403    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:36.731427    4270 round_trippers.go:469] Request Headers:
I1003 20:29:36.731440    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:36.731448    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:36.733668    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:37.231295    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:37.231323    4270 round_trippers.go:469] Request Headers:
I1003 20:29:37.231339    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:37.231347    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:37.233710    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:37.731098    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:37.731116    4270 round_trippers.go:469] Request Headers:
I1003 20:29:37.731122    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:37.731125    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:37.732581    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:38.232050    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:38.232069    4270 round_trippers.go:469] Request Headers:
I1003 20:29:38.232081    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:38.232086    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:38.234634    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:38.234702    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:38.732502    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:38.732525    4270 round_trippers.go:469] Request Headers:
I1003 20:29:38.732538    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:38.732543    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:38.735067    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:39.231797    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:39.231809    4270 round_trippers.go:469] Request Headers:
I1003 20:29:39.231816    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:39.231820    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:39.233353    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:39.731085    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:39.731108    4270 round_trippers.go:469] Request Headers:
I1003 20:29:39.731124    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:39.731130    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:39.733512    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:40.231228    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:40.231248    4270 round_trippers.go:469] Request Headers:
I1003 20:29:40.231259    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:40.231264    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:40.233329    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:40.731261    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:40.731273    4270 round_trippers.go:469] Request Headers:
I1003 20:29:40.731278    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:40.731281    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:40.733003    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:40.733064    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:41.231562    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:41.231582    4270 round_trippers.go:469] Request Headers:
I1003 20:29:41.231594    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:41.231602    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:41.234257    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:41.731439    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:41.731454    4270 round_trippers.go:469] Request Headers:
I1003 20:29:41.731462    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:41.731466    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:41.733033    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:42.230981    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:42.230994    4270 round_trippers.go:469] Request Headers:
I1003 20:29:42.231000    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:42.231003    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:42.232664    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:42.731843    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:42.731865    4270 round_trippers.go:469] Request Headers:
I1003 20:29:42.731875    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:42.731882    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:42.734555    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:42.734627    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:43.231596    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:43.231633    4270 round_trippers.go:469] Request Headers:
I1003 20:29:43.231644    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:43.231650    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:43.234401    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:43.732503    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:43.732517    4270 round_trippers.go:469] Request Headers:
I1003 20:29:43.732523    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:43.732526    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:43.734011    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:44.231049    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:44.231061    4270 round_trippers.go:469] Request Headers:
I1003 20:29:44.231067    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:44.231070    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:44.232509    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:44.731839    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:44.731865    4270 round_trippers.go:469] Request Headers:
I1003 20:29:44.731876    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:44.731882    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:44.734356    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:45.232122    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:45.232184    4270 round_trippers.go:469] Request Headers:
I1003 20:29:45.232192    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:45.232197    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:45.233762    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:45.233819    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:45.732467    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:45.732489    4270 round_trippers.go:469] Request Headers:
I1003 20:29:45.732502    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:45.732510    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:45.735038    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:46.231799    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:46.231817    4270 round_trippers.go:469] Request Headers:
I1003 20:29:46.231828    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:46.231834    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:46.234151    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:46.731222    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:46.731236    4270 round_trippers.go:469] Request Headers:
I1003 20:29:46.731242    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:46.731245    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:46.732711    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:47.232794    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:47.232820    4270 round_trippers.go:469] Request Headers:
I1003 20:29:47.232832    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:47.232838    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:47.235457    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:47.235530    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:47.732257    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:47.732285    4270 round_trippers.go:469] Request Headers:
I1003 20:29:47.732379    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:47.732388    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:47.734817    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:48.232601    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:48.232617    4270 round_trippers.go:469] Request Headers:
I1003 20:29:48.232624    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:48.232628    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:48.234187    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:48.731963    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:48.731984    4270 round_trippers.go:469] Request Headers:
I1003 20:29:48.731995    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:48.732000    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:48.734340    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:49.232352    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:49.232374    4270 round_trippers.go:469] Request Headers:
I1003 20:29:49.232387    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:49.232394    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:49.234725    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:49.732314    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:49.732328    4270 round_trippers.go:469] Request Headers:
I1003 20:29:49.732335    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:49.732339    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:49.733918    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:49.733980    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:50.233012    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:50.233036    4270 round_trippers.go:469] Request Headers:
I1003 20:29:50.233047    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:50.233053    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:50.235525    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:50.732659    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:50.732681    4270 round_trippers.go:469] Request Headers:
I1003 20:29:50.732692    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:50.732698    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:50.735518    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:51.231717    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:51.231729    4270 round_trippers.go:469] Request Headers:
I1003 20:29:51.231735    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:51.231738    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:51.233104    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:51.731221    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:51.731241    4270 round_trippers.go:469] Request Headers:
I1003 20:29:51.731253    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:51.731260    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:51.733464    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:52.231989    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:52.232011    4270 round_trippers.go:469] Request Headers:
I1003 20:29:52.232022    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:52.232031    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:52.234515    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:52.234586    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:52.731881    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:52.731895    4270 round_trippers.go:469] Request Headers:
I1003 20:29:52.731901    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:52.731904    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:52.733454    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:53.231369    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:53.231382    4270 round_trippers.go:469] Request Headers:
I1003 20:29:53.231388    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:53.231391    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:53.232441    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:53.732420    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:53.732449    4270 round_trippers.go:469] Request Headers:
I1003 20:29:53.732460    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:53.732466    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:53.734828    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:54.231849    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:54.231865    4270 round_trippers.go:469] Request Headers:
I1003 20:29:54.231871    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:54.231875    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:54.233191    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:54.731254    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:54.731273    4270 round_trippers.go:469] Request Headers:
I1003 20:29:54.731284    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:54.731291    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:54.733381    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:54.733479    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:55.231684    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:55.231705    4270 round_trippers.go:469] Request Headers:
I1003 20:29:55.231716    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:55.231722    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:55.234592    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:55.732726    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:55.732739    4270 round_trippers.go:469] Request Headers:
I1003 20:29:55.732745    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:55.732748    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:55.734256    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:56.231371    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:56.231393    4270 round_trippers.go:469] Request Headers:
I1003 20:29:56.231405    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:56.231410    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:56.234104    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:56.731934    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:56.732036    4270 round_trippers.go:469] Request Headers:
I1003 20:29:56.732052    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:56.732060    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:56.734421    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:56.734546    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:57.231183    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:57.231201    4270 round_trippers.go:469] Request Headers:
I1003 20:29:57.231208    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:57.231212    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:57.232834    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:57.732404    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:57.732430    4270 round_trippers.go:469] Request Headers:
I1003 20:29:57.732443    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:57.732528    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:57.734955    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:58.231265    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:58.231290    4270 round_trippers.go:469] Request Headers:
I1003 20:29:58.231300    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:58.231305    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:58.233722    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:58.731216    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:58.731229    4270 round_trippers.go:469] Request Headers:
I1003 20:29:58.731235    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:58.731238    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:58.732627    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:29:59.231315    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:59.231337    4270 round_trippers.go:469] Request Headers:
I1003 20:29:59.231349    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:59.231357    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:59.233913    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:29:59.233987    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:29:59.731316    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:29:59.731335    4270 round_trippers.go:469] Request Headers:
I1003 20:29:59.731345    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:29:59.731352    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:29:59.733764    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:00.232360    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:00.232406    4270 round_trippers.go:469] Request Headers:
I1003 20:30:00.232415    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:00.232419    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:00.233807    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:00.731168    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:00.731180    4270 round_trippers.go:469] Request Headers:
I1003 20:30:00.731186    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:00.731192    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:00.732706    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:01.231260    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:01.231276    4270 round_trippers.go:469] Request Headers:
I1003 20:30:01.231284    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:01.231289    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:01.233056    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:01.731351    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:01.731364    4270 round_trippers.go:469] Request Headers:
I1003 20:30:01.731374    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:01.731377    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:01.732849    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:01.732907    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:02.231445    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:02.231462    4270 round_trippers.go:469] Request Headers:
I1003 20:30:02.231492    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:02.231497    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:02.233220    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:02.731713    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:02.731725    4270 round_trippers.go:469] Request Headers:
I1003 20:30:02.731731    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:02.731734    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:02.733096    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:03.231532    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:03.231549    4270 round_trippers.go:469] Request Headers:
I1003 20:30:03.231555    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:03.231559    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:03.233082    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:03.732221    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:03.732241    4270 round_trippers.go:469] Request Headers:
I1003 20:30:03.732252    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:03.732260    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:03.734278    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:03.734348    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:04.231676    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:04.231697    4270 round_trippers.go:469] Request Headers:
I1003 20:30:04.231709    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:04.231714    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:04.233998    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:04.731277    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:04.731290    4270 round_trippers.go:469] Request Headers:
I1003 20:30:04.731297    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:04.731301    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:04.732636    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:05.231279    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:05.231299    4270 round_trippers.go:469] Request Headers:
I1003 20:30:05.231310    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:05.231315    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:05.233681    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:05.732855    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:05.732875    4270 round_trippers.go:469] Request Headers:
I1003 20:30:05.732886    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:05.732891    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:05.735097    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:05.735227    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:06.232341    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:06.232356    4270 round_trippers.go:469] Request Headers:
I1003 20:30:06.232362    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:06.232365    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:06.233922    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:06.732354    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:06.732374    4270 round_trippers.go:469] Request Headers:
I1003 20:30:06.732385    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:06.732391    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:06.734877    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:07.231642    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:07.231662    4270 round_trippers.go:469] Request Headers:
I1003 20:30:07.231675    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:07.231684    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:07.233679    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:07.731221    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:07.731238    4270 round_trippers.go:469] Request Headers:
I1003 20:30:07.731244    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:07.731248    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:07.732910    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:08.232549    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:08.232569    4270 round_trippers.go:469] Request Headers:
I1003 20:30:08.232581    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:08.232588    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:08.235147    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:08.235213    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:08.732677    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:08.732703    4270 round_trippers.go:469] Request Headers:
I1003 20:30:08.732715    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:08.732723    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:08.734966    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:09.231742    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:09.231755    4270 round_trippers.go:469] Request Headers:
I1003 20:30:09.231761    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:09.231764    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:09.233275    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:09.731634    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:09.731646    4270 round_trippers.go:469] Request Headers:
I1003 20:30:09.731671    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:09.731675    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:09.733484    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:10.232914    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:10.232939    4270 round_trippers.go:469] Request Headers:
I1003 20:30:10.232950    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:10.232956    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:10.235117    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:10.732065    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:10.732082    4270 round_trippers.go:469] Request Headers:
I1003 20:30:10.732088    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:10.732092    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:10.733684    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:10.733740    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:11.231497    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:11.231509    4270 round_trippers.go:469] Request Headers:
I1003 20:30:11.231515    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:11.231519    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:11.233054    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:11.732509    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:11.732528    4270 round_trippers.go:469] Request Headers:
I1003 20:30:11.732539    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:11.732546    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:11.734853    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:12.231284    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:12.231297    4270 round_trippers.go:469] Request Headers:
I1003 20:30:12.231303    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:12.231306    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:12.233522    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:12.732288    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:12.732390    4270 round_trippers.go:469] Request Headers:
I1003 20:30:12.732406    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:12.732414    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:12.734761    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:12.734831    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:13.232453    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:13.232474    4270 round_trippers.go:469] Request Headers:
I1003 20:30:13.232485    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:13.232493    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:13.235527    4270 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I1003 20:30:13.732804    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:13.732817    4270 round_trippers.go:469] Request Headers:
I1003 20:30:13.732823    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:13.732827    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:13.734370    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:14.231569    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:14.231589    4270 round_trippers.go:469] Request Headers:
I1003 20:30:14.231601    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:14.231608    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:14.233874    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:14.731348    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:14.731361    4270 round_trippers.go:469] Request Headers:
I1003 20:30:14.731367    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:14.731370    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:14.732703    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:15.233237    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:15.233250    4270 round_trippers.go:469] Request Headers:
I1003 20:30:15.233256    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:15.233258    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:15.234979    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:15.235062    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:15.732200    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:15.732222    4270 round_trippers.go:469] Request Headers:
I1003 20:30:15.732233    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:15.732240    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:15.734760    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:16.232713    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:16.232735    4270 round_trippers.go:469] Request Headers:
I1003 20:30:16.232744    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:16.232751    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:16.235276    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:16.731528    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:16.731541    4270 round_trippers.go:469] Request Headers:
I1003 20:30:16.731548    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:16.731551    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:16.733221    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:17.231417    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:17.231439    4270 round_trippers.go:469] Request Headers:
I1003 20:30:17.231451    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:17.231458    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:17.233735    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:17.732196    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:17.732227    4270 round_trippers.go:469] Request Headers:
I1003 20:30:17.732240    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:17.732248    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:17.734816    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:17.734942    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:18.232574    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:18.232587    4270 round_trippers.go:469] Request Headers:
I1003 20:30:18.232592    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:18.232596    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:18.234081    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:18.731956    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:18.731974    4270 round_trippers.go:469] Request Headers:
I1003 20:30:18.731986    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:18.731992    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:18.734435    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:19.231584    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:19.231605    4270 round_trippers.go:469] Request Headers:
I1003 20:30:19.231616    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:19.231623    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:19.234161    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:19.731340    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:19.731356    4270 round_trippers.go:469] Request Headers:
I1003 20:30:19.731365    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:19.731368    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:19.732924    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:20.232309    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:20.232323    4270 round_trippers.go:469] Request Headers:
I1003 20:30:20.232331    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:20.232336    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:20.234320    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:20.234372    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:20.732202    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:20.732224    4270 round_trippers.go:469] Request Headers:
I1003 20:30:20.732235    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:20.732241    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:20.734603    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:21.231291    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:21.231305    4270 round_trippers.go:469] Request Headers:
I1003 20:30:21.231311    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:21.231314    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:21.232972    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:21.731665    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:21.731771    4270 round_trippers.go:469] Request Headers:
I1003 20:30:21.731785    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:21.731791    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:21.734075    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:22.232102    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:22.232127    4270 round_trippers.go:469] Request Headers:
I1003 20:30:22.232137    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:22.232145    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:22.234617    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:22.234786    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:22.733045    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:22.733060    4270 round_trippers.go:469] Request Headers:
I1003 20:30:22.733066    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:22.733069    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:22.734499    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:23.232708    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:23.232726    4270 round_trippers.go:469] Request Headers:
I1003 20:30:23.232791    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:23.232798    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:23.234508    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:23.731687    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:23.731711    4270 round_trippers.go:469] Request Headers:
I1003 20:30:23.731723    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:23.731730    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:23.733917    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:24.232545    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:24.232558    4270 round_trippers.go:469] Request Headers:
I1003 20:30:24.232563    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:24.232567    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:24.234104    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:24.732327    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:24.732353    4270 round_trippers.go:469] Request Headers:
I1003 20:30:24.732366    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:24.732414    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:24.734473    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:24.734552    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:25.231489    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:25.231508    4270 round_trippers.go:469] Request Headers:
I1003 20:30:25.231520    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:25.231526    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:25.234054    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:25.732706    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:25.732727    4270 round_trippers.go:469] Request Headers:
I1003 20:30:25.732737    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:25.732741    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:25.734816    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:26.232105    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:26.232120    4270 round_trippers.go:469] Request Headers:
I1003 20:30:26.232128    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:26.232134    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:26.234096    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:26.731549    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:26.731575    4270 round_trippers.go:469] Request Headers:
I1003 20:30:26.731585    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:26.731593    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:26.734160    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:27.231481    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:27.231494    4270 round_trippers.go:469] Request Headers:
I1003 20:30:27.231500    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:27.231502    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:27.233073    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:27.233130    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:27.731554    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:27.731573    4270 round_trippers.go:469] Request Headers:
I1003 20:30:27.731585    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:27.731593    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:27.733697    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:28.232056    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:28.232074    4270 round_trippers.go:469] Request Headers:
I1003 20:30:28.232086    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:28.232091    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:28.234260    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:28.733366    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:28.733381    4270 round_trippers.go:469] Request Headers:
I1003 20:30:28.733388    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:28.733391    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:28.734759    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:29.232519    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:29.232540    4270 round_trippers.go:469] Request Headers:
I1003 20:30:29.232552    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:29.232558    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:29.235089    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:29.235160    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:29.731604    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:29.731623    4270 round_trippers.go:469] Request Headers:
I1003 20:30:29.731634    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:29.731641    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:29.733732    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:30.231656    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:30.231670    4270 round_trippers.go:469] Request Headers:
I1003 20:30:30.231676    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:30.231679    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:30.233220    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:30.732671    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:30.732695    4270 round_trippers.go:469] Request Headers:
I1003 20:30:30.732707    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:30.732714    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:30.735190    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:31.231836    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:31.231858    4270 round_trippers.go:469] Request Headers:
I1003 20:30:31.231871    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:31.231876    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:31.234421    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:31.732128    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:31.732142    4270 round_trippers.go:469] Request Headers:
I1003 20:30:31.732148    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:31.732150    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:31.733530    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:31.733583    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:32.232724    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:32.232746    4270 round_trippers.go:469] Request Headers:
I1003 20:30:32.232758    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:32.232767    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:32.235331    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:32.731529    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:32.731573    4270 round_trippers.go:469] Request Headers:
I1003 20:30:32.731583    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:32.731589    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:32.733363    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:33.233398    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:33.233411    4270 round_trippers.go:469] Request Headers:
I1003 20:30:33.233417    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:33.233421    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:33.234928    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:33.732506    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:33.732607    4270 round_trippers.go:469] Request Headers:
I1003 20:30:33.732622    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:33.732629    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:33.734767    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:33.734834    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:34.232600    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:34.232622    4270 round_trippers.go:469] Request Headers:
I1003 20:30:34.232635    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:34.232646    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:34.235484    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:34.732074    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:34.732090    4270 round_trippers.go:469] Request Headers:
I1003 20:30:34.732097    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:34.732101    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:34.733578    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:35.233627    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:35.233649    4270 round_trippers.go:469] Request Headers:
I1003 20:30:35.233661    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:35.233667    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:35.236124    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:35.732809    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:35.732835    4270 round_trippers.go:469] Request Headers:
I1003 20:30:35.732846    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:35.732853    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:35.735373    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:35.735546    4270 node_ready.go:53] error getting node "ha-214000-m02": nodes "ha-214000-m02" not found
I1003 20:30:36.231546    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:36.231562    4270 round_trippers.go:469] Request Headers:
I1003 20:30:36.231568    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:36.231571    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:36.233173    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:36.731923    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:36.731944    4270 round_trippers.go:469] Request Headers:
I1003 20:30:36.731957    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:36.731973    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:36.734508    4270 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1003 20:30:37.231652    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:37.231667    4270 round_trippers.go:469] Request Headers:
I1003 20:30:37.231674    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:37.231678    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:37.233677    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:37.732340    4270 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-214000-m02
I1003 20:30:37.732353    4270 round_trippers.go:469] Request Headers:
I1003 20:30:37.732360    4270 round_trippers.go:473]     Accept: application/json, */*
I1003 20:30:37.732363    4270 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I1003 20:30:37.733876    4270 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I1003 20:30:37.733941    4270 node_ready.go:38] duration metric: took 4m0.002595913s for node "ha-214000-m02" to be "Ready" ...
I1003 20:30:37.756809    4270 out.go:201] 
	W1003 20:30:37.778376    4270 out.go:270] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W1003 20:30:37.778389    4270 out.go:270] * 
W1003 20:30:37.780960    4270 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 20:30:37.802240    4270 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-amd64 -p ha-214000 node start m02 -v=7 --alsologtostderr": exit status 80
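Editor's note: the four-minute run of 404 responses above is the node_ready wait loop polling the API server for the "ha-214000-m02" node object roughly every 500ms until the wait deadline fires. The following is a minimal, self-contained Go sketch of that pattern, not minikube's actual implementation; the host, port, node name, and interval are taken from the log, while the client wiring (including skipping TLS verification) is an assumption for illustration only.

-- go sketch (editor's addition) --
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls GET <apiServer>/api/v1/nodes/<node> every 500ms,
// mirroring the round_trippers/node_ready lines in the log above, until the
// node object exists or ctx expires.
func waitNodeReady(ctx context.Context, apiServer, node string) error {
	client := &http.Client{
		// assumption: the sketch skips cert verification; real code would
		// trust the cluster CA from the kubeconfig.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q: %w", node, ctx.Err())
		case <-tick.C:
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := client.Do(req)
			if err != nil {
				continue // transient transport error: keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // node registered; real code also checks the Ready condition
			}
			// 404 Not Found: node not registered yet, poll again
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, "https://192.169.0.5:8443", "ha-214000-m02"); err != nil {
		fmt.Println("X", err) // e.g. context deadline exceeded, as in the failure above
	}
}
-- /go sketch --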
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (354.595231ms)

                                                
                                                
-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:30:38.063315    4632 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:38.063644    4632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:38.063650    4632 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:38.063653    4632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:38.063843    4632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:38.064042    4632 out.go:352] Setting JSON to false
	I1003 20:30:38.064066    4632 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:38.064103    4632 notify.go:220] Checking for updates...
	I1003 20:30:38.064443    4632 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:38.064464    4632 status.go:174] checking status of ha-214000 ...
	I1003 20:30:38.064927    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.065007    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.076297    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51370
	I1003 20:30:38.076621    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.077022    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.077047    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.077300    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.077437    4632 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:38.077540    4632 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:38.077626    4632 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:38.078693    4632 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:38.078712    4632 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:38.078986    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.079008    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.089922    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51372
	I1003 20:30:38.090248    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.090581    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.090593    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.090805    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.090912    4632 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:38.091007    4632 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:38.091284    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.091305    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.102313    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51374
	I1003 20:30:38.102658    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.102986    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.102995    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.103267    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.103393    4632 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:38.103590    4632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:38.103608    4632 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:38.103711    4632 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:38.103795    4632 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:38.103895    4632 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:38.103982    4632 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:38.136079    4632 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:38.140712    4632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:38.152065    4632 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:38.152091    4632 api_server.go:166] Checking apiserver status ...
	I1003 20:30:38.152146    4632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:38.163327    4632 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:38.171116    4632 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:38.171202    4632 ssh_runner.go:195] Run: ls
	I1003 20:30:38.175386    4632 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:38.178765    4632 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:38.178777    4632 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:38.178805    4632 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:38.178820    4632 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:38.179099    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.179121    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.190440    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51378
	I1003 20:30:38.190750    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.191087    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.191105    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.191325    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.191443    4632 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:38.191530    4632 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:38.191622    4632 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:38.192695    4632 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:38.192704    4632 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:38.192983    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.193010    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.204181    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51380
	I1003 20:30:38.204508    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.204868    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.204878    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.205103    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.205215    4632 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:38.205313    4632 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:38.205605    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.205642    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.216715    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51382
	I1003 20:30:38.217020    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.217344    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.217366    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.217561    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.217669    4632 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:38.217802    4632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:38.217820    4632 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:38.217901    4632 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:38.217993    4632 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:38.218089    4632 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:38.218165    4632 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:38.248949    4632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:38.260354    4632 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:38.260367    4632 api_server.go:166] Checking apiserver status ...
	I1003 20:30:38.260417    4632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:38.270745    4632 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:38.270755    4632 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:30:38.270761    4632 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:38.270770    4632 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:38.271065    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.271088    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.282109    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51385
	I1003 20:30:38.282416    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.282777    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.282801    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.283035    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.283157    4632 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:38.283245    4632 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:38.283328    4632 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:38.284406    4632 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:38.284414    4632 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:38.284668    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.284697    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.295659    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51387
	I1003 20:30:38.295958    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.296279    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.296287    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.296524    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.296635    4632 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:38.296721    4632 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:38.296989    4632 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:38.297015    4632 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:38.307756    4632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51389
	I1003 20:30:38.308060    4632 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:38.308399    4632 main.go:141] libmachine: Using API Version  1
	I1003 20:30:38.308417    4632 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:38.308638    4632 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:38.308754    4632 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:38.308909    4632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:38.308923    4632 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:38.309005    4632 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:38.309089    4632 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:38.309182    4632 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:38.309251    4632 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:38.342685    4632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:38.354448    4632 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:38.357111    2003 retry.go:31] will retry after 853.481996ms: exit status 2
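
The stderr block above shows how each status pass decides the APIServer field for a secondary control-plane node: it runs pgrep for a kube-apiserver pid over SSH, and a non-zero pgrep exit (no matching process) is reported as Stopped. A minimal Go sketch of that decision, assuming a generic runner function in place of minikube's internal ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// apiserverState mirrors the probe visible in the log:
	// "sudo pgrep -xnf kube-apiserver.*minikube.*" exits 1 when no
	// process matches, which the status check reports as "Stopped".
	func apiserverState(run func(cmd string) error) string {
		if err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err != nil {
			return "Stopped"
		}
		return "Running"
	}

	func main() {
		// Local stand-in for the SSH runner used by the real check.
		local := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
		fmt.Println("apiserver:", apiserverState(local))
	}
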
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (351.747486ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:30:39.272412    4643 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:39.272635    4643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:39.272641    4643 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:39.272645    4643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:39.272817    4643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:39.273006    4643 out.go:352] Setting JSON to false
	I1003 20:30:39.273029    4643 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:39.273072    4643 notify.go:220] Checking for updates...
	I1003 20:30:39.273392    4643 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:39.273414    4643 status.go:174] checking status of ha-214000 ...
	I1003 20:30:39.273836    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.273887    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.285307    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51393
	I1003 20:30:39.285666    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.286083    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.286096    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.286313    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.286435    4643 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:39.286516    4643 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:39.286584    4643 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:39.287678    4643 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:39.287697    4643 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:39.287947    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.287968    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.298946    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51395
	I1003 20:30:39.299264    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.299626    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.299648    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.299859    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.299974    4643 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:39.300070    4643 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:39.300326    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.300350    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.311101    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51397
	I1003 20:30:39.311402    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.311717    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.311727    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.311973    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.312075    4643 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:39.312273    4643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:39.312298    4643 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:39.312379    4643 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:39.312476    4643 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:39.312550    4643 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:39.312644    4643 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:39.344454    4643 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:39.349219    4643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:39.361163    4643 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:39.361195    4643 api_server.go:166] Checking apiserver status ...
	I1003 20:30:39.361250    4643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:39.372510    4643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:39.380111    4643 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:39.380165    4643 ssh_runner.go:195] Run: ls
	I1003 20:30:39.383538    4643 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:39.387413    4643 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:39.387425    4643 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:39.387431    4643 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:39.387441    4643 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:39.387698    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.387720    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.398800    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51401
	I1003 20:30:39.399117    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.399446    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.399467    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.399685    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.399802    4643 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:39.399882    4643 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:39.399956    4643 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:39.401062    4643 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:39.401071    4643 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:39.401361    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.401383    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.412511    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51403
	I1003 20:30:39.412836    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.413159    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.413170    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.413378    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.413494    4643 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:39.413594    4643 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:39.413847    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.413871    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.424855    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51405
	I1003 20:30:39.425305    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.425642    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.425655    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.425880    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.425996    4643 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:39.426140    4643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:39.426152    4643 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:39.426230    4643 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:39.426312    4643 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:39.426403    4643 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:39.426479    4643 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:39.456645    4643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:39.468003    4643 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:39.468016    4643 api_server.go:166] Checking apiserver status ...
	I1003 20:30:39.468063    4643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:39.478517    4643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:39.478529    4643 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:30:39.478534    4643 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:39.478544    4643 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:39.478822    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.478853    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.490123    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51408
	I1003 20:30:39.490439    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.490803    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.490824    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.491027    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.491118    4643 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:39.491204    4643 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:39.491275    4643 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:39.492378    4643 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:39.492387    4643 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:39.492650    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.492676    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.503830    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51410
	I1003 20:30:39.504198    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.504554    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.504562    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.504773    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.504868    4643 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:39.504959    4643 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:39.505245    4643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:39.505274    4643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:39.516195    4643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51412
	I1003 20:30:39.516522    4643 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:39.516833    4643 main.go:141] libmachine: Using API Version  1
	I1003 20:30:39.516842    4643 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:39.517064    4643 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:39.517175    4643 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:39.517323    4643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:39.517334    4643 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:39.517414    4643 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:39.517499    4643 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:39.517580    4643 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:39.517685    4643 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:39.550842    4643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:39.562178    4643 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:39.564790    2003 retry.go:31] will retry after 1.281262168s: exit status 2
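
The "retry.go:31] will retry after ..." lines are the harness polling the status command until it stops exiting with status 2. A rough sketch of that retry-with-growing-jittered-delay pattern; the growth rule and jitter below are assumptions for illustration, not minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps calling f with a growing, jittered delay, which is one
	// way to produce the uneven 0.85s / 1.28s / 2.23s gaps seen in this log.
	func retry(attempts int, base time.Duration, f func() error) error {
		var err error
		for i := 1; i <= attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			d := base*time.Duration(i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(3, 800*time.Millisecond, func() error {
			return fmt.Errorf("exit status 2") // stand-in for the failing status call
		})
	}
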
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (353.106206ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:30:40.908240    4654 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:40.908474    4654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:40.908479    4654 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:40.908483    4654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:40.908647    4654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:40.908823    4654 out.go:352] Setting JSON to false
	I1003 20:30:40.908845    4654 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:40.908887    4654 notify.go:220] Checking for updates...
	I1003 20:30:40.909146    4654 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:40.909168    4654 status.go:174] checking status of ha-214000 ...
	I1003 20:30:40.909610    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:40.909650    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:40.921010    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51416
	I1003 20:30:40.921336    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:40.921750    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:40.921778    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:40.921987    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:40.922105    4654 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:40.922189    4654 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:40.922257    4654 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:40.923366    4654 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:40.923385    4654 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:40.923633    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:40.923658    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:40.934506    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51418
	I1003 20:30:40.934828    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:40.935172    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:40.935187    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:40.935386    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:40.935494    4654 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:40.935579    4654 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:40.935840    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:40.935869    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:40.946575    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51420
	I1003 20:30:40.946889    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:40.947237    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:40.947255    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:40.947474    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:40.947598    4654 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:40.947757    4654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:40.947778    4654 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:40.947866    4654 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:40.947964    4654 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:40.948099    4654 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:40.948181    4654 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:40.980921    4654 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:40.985393    4654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:40.996306    4654 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:40.996331    4654 api_server.go:166] Checking apiserver status ...
	I1003 20:30:40.996381    4654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:41.007704    4654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:41.015118    4654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:41.015168    4654 ssh_runner.go:195] Run: ls
	I1003 20:30:41.018445    4654 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:41.021776    4654 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:41.021790    4654 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:41.021796    4654 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:41.021813    4654 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:41.022124    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:41.022144    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:41.033354    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51424
	I1003 20:30:41.033686    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:41.034005    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:41.034013    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:41.034215    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:41.034335    4654 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:41.034421    4654 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:41.034487    4654 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:41.035590    4654 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:41.035599    4654 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:41.035847    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:41.035868    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:41.046890    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51426
	I1003 20:30:41.047233    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:41.047630    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:41.047646    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:41.047865    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:41.047962    4654 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:41.048050    4654 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:41.048311    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:41.048335    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:41.059036    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51428
	I1003 20:30:41.059320    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:41.059677    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:41.059697    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:41.059930    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:41.060054    4654 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:41.060201    4654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:41.060217    4654 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:41.060321    4654 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:41.060418    4654 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:41.060510    4654 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:41.060599    4654 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:41.091715    4654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:41.102615    4654 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:41.102629    4654 api_server.go:166] Checking apiserver status ...
	I1003 20:30:41.102689    4654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:41.112262    4654 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:41.112272    4654 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:30:41.112277    4654 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:41.112286    4654 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:41.112563    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:41.112585    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:41.123612    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51431
	I1003 20:30:41.123948    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:41.124267    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:41.124277    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:41.124513    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:41.124634    4654 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:41.124721    4654 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:41.124798    4654 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:41.125931    4654 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:41.125940    4654 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:41.126183    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:41.126208    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:41.136942    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51433
	I1003 20:30:41.137348    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:41.137700    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:41.137712    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:41.137925    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:41.138033    4654 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:41.138116    4654 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:41.138382    4654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:41.138402    4654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:41.149424    4654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51435
	I1003 20:30:41.149750    4654 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:41.150104    4654 main.go:141] libmachine: Using API Version  1
	I1003 20:30:41.150122    4654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:41.150348    4654 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:41.150459    4654 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:41.150606    4654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:41.150620    4654 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:41.150695    4654 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:41.150799    4654 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:41.150879    4654 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:41.150954    4654 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:41.183885    4654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:41.198276    4654 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:41.201172    2003 retry.go:31] will retry after 2.230386062s: exit status 2
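
For the primary node the probe in the block above goes beyond pgrep: it looks for a freezer cgroup for the apiserver pid (absent here, hence the warning), then falls back to an HTTPS GET against /healthz and treats a 200 as Running. A sketch of that final health check; skipping certificate verification is an assumption made here because the cluster serves a self-signed certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// healthz performs the "Checking apiserver healthz at https://.../healthz"
	// step from the log: any 200 response counts as a healthy apiserver.
	func healthz(url string) (bool, error) {
		c := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := c.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := healthz("https://192.169.0.254:8443/healthz")
		fmt.Println("healthy:", ok, "err:", err)
	}
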
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (366.050625ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:30:43.495028    4665 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:43.495244    4665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:43.495250    4665 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:43.495254    4665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:43.495433    4665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:43.495615    4665 out.go:352] Setting JSON to false
	I1003 20:30:43.495640    4665 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:43.495676    4665 notify.go:220] Checking for updates...
	I1003 20:30:43.495994    4665 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:43.496015    4665 status.go:174] checking status of ha-214000 ...
	I1003 20:30:43.496426    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.496465    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.507962    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51439
	I1003 20:30:43.508299    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.508742    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.508752    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.509024    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.509169    4665 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:43.509292    4665 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:43.509347    4665 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:43.510501    4665 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:43.510520    4665 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:43.510782    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.510808    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.522040    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51441
	I1003 20:30:43.522382    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.522791    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.522815    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.523037    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.523147    4665 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:43.523258    4665 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:43.523554    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.523584    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.534827    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51443
	I1003 20:30:43.535158    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.535563    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.535579    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.535830    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.535943    4665 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:43.536106    4665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:43.536127    4665 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:43.536205    4665 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:43.536295    4665 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:43.536384    4665 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:43.536471    4665 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:43.569292    4665 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:43.573745    4665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:43.585435    4665 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:43.585462    4665 api_server.go:166] Checking apiserver status ...
	I1003 20:30:43.585517    4665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:43.596526    4665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:43.604618    4665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:43.604672    4665 ssh_runner.go:195] Run: ls
	I1003 20:30:43.607893    4665 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:43.611736    4665 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:43.611746    4665 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:43.611752    4665 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:43.611766    4665 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:43.612019    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.612043    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.623044    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51447
	I1003 20:30:43.623486    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.623863    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.623879    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.624119    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.624224    4665 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:43.624302    4665 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:43.624380    4665 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:43.625522    4665 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:43.625532    4665 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:43.625778    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.625805    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.636601    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51449
	I1003 20:30:43.636941    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.637297    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.637314    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.637530    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.637632    4665 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:43.637737    4665 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:43.638005    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.638030    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.648759    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51451
	I1003 20:30:43.649066    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.649436    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.649451    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.649641    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.649763    4665 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:43.649906    4665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:43.649916    4665 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:43.650000    4665 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:43.650079    4665 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:43.650166    4665 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:43.650237    4665 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:43.681862    4665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:43.695679    4665 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:43.695695    4665 api_server.go:166] Checking apiserver status ...
	I1003 20:30:43.695748    4665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:43.711041    4665 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:43.711059    4665 status.go:463] ha-214000-m02 apiserver status = Running (err=<nil>)
	I1003 20:30:43.711065    4665 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:43.711074    4665 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:43.711360    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.711382    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.722684    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51454
	I1003 20:30:43.723017    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.723379    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.723389    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.723607    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.723723    4665 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:43.723799    4665 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:43.723884    4665 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:43.725031    4665 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:43.725041    4665 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:43.725312    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.725342    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.736166    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51456
	I1003 20:30:43.736480    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.736826    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.736841    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.737070    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.737177    4665 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:43.737254    4665 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:43.737513    4665 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:43.737547    4665 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:43.748303    4665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51458
	I1003 20:30:43.748622    4665 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:43.748958    4665 main.go:141] libmachine: Using API Version  1
	I1003 20:30:43.748972    4665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:43.749213    4665 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:43.749338    4665 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:43.749490    4665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:43.749504    4665 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:43.749597    4665 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:43.749682    4665 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:43.749766    4665 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:43.749842    4665 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:43.785194    4665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:43.796690    4665 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:43.800087    2003 retry.go:31] will retry after 1.863537697s: exit status 2
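
Each pass also samples disk usage on every node with the df -h /var | awk 'NR==2{print $5}' one-liner that recurs throughout this log. The same pipeline, run locally rather than over SSH as a simplification:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// varUsage runs the exact pipeline from the log and returns the
	// used-percentage column for the /var filesystem (e.g. "17%").
	func varUsage() (string, error) {
		out, err := exec.Command("sh", "-c",
			`df -h /var | awk 'NR==2{print $5}'`).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		pct, err := varUsage()
		fmt.Println("/var usage:", pct, "err:", err)
	}
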
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (360.031588ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:30:45.723234    4676 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:45.723979    4676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:45.723988    4676 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:45.723995    4676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:45.724554    4676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:45.724757    4676 out.go:352] Setting JSON to false
	I1003 20:30:45.724783    4676 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:45.724820    4676 notify.go:220] Checking for updates...
	I1003 20:30:45.725107    4676 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:45.725126    4676 status.go:174] checking status of ha-214000 ...
	I1003 20:30:45.725524    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.725572    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.737235    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51462
	I1003 20:30:45.737679    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.738114    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.738129    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.738388    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.738486    4676 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:45.738573    4676 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:45.738652    4676 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:45.739782    4676 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:45.739800    4676 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:45.740065    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.740087    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.750937    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51464
	I1003 20:30:45.751261    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.751589    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.751600    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.751826    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.751937    4676 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:45.752031    4676 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:45.752319    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.752349    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.763197    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51466
	I1003 20:30:45.763523    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.763893    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.763911    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.764126    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.764249    4676 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:45.764418    4676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:45.764437    4676 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:45.764534    4676 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:45.764630    4676 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:45.764732    4676 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:45.764824    4676 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:45.797960    4676 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:45.802459    4676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:45.814272    4676 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:45.814298    4676 api_server.go:166] Checking apiserver status ...
	I1003 20:30:45.814349    4676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:45.825498    4676 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:45.832933    4676 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:45.832989    4676 ssh_runner.go:195] Run: ls
	I1003 20:30:45.836199    4676 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:45.839427    4676 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:45.839444    4676 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:45.839452    4676 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:45.839462    4676 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:45.839719    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.839740    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.850835    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51470
	I1003 20:30:45.851174    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.851519    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.851533    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.851720    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.851850    4676 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:45.851965    4676 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:45.852006    4676 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:45.853129    4676 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:45.853137    4676 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:45.853401    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.853422    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.864399    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51472
	I1003 20:30:45.864743    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.865091    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.865101    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.865329    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.865468    4676 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:45.865565    4676 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:45.865816    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.865842    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.876641    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51474
	I1003 20:30:45.876949    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.877326    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.877341    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.877565    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.877678    4676 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:45.877835    4676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:45.877847    4676 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:45.877925    4676 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:45.878004    4676 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:45.878091    4676 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:45.878171    4676 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:45.909421    4676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:45.925292    4676 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:45.925306    4676 api_server.go:166] Checking apiserver status ...
	I1003 20:30:45.925387    4676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:45.937754    4676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:45.937769    4676 status.go:463] ha-214000-m02 apiserver status = Running (err=<nil>)
	I1003 20:30:45.937774    4676 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:45.937787    4676 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:45.938076    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.938098    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.949166    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51477
	I1003 20:30:45.949571    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.949965    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.949988    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.950236    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.950356    4676 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:45.950475    4676 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:45.950539    4676 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:45.951682    4676 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:45.951693    4676 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:45.951950    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.951985    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.962957    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51479
	I1003 20:30:45.963298    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.963633    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.963648    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.963890    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.963994    4676 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:45.964079    4676 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:45.964338    4676 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:45.964361    4676 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:45.975249    4676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51481
	I1003 20:30:45.975579    4676 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:45.975930    4676 main.go:141] libmachine: Using API Version  1
	I1003 20:30:45.975949    4676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:45.976183    4676 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:45.976307    4676 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:45.976459    4676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:45.976472    4676 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:45.976579    4676 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:45.976665    4676 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:45.976755    4676 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:45.976840    4676 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:46.010469    4676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:46.021851    4676 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:46.024566    2003 retry.go:31] will retry after 3.514265486s: exit status 2
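
For reference, the "retry.go:31] will retry after N: exit status 2" lines show the harness re-running "minikube status" with growing, randomized delays until it exits 0 or the test gives up. Below is a minimal Go sketch of that backoff loop; the function name and jitter formula are illustrative assumptions, not minikube's actual retry package.

	// retry_sketch.go - a minimal sketch of the backoff loop suggested by the
	// "will retry after N: exit status 2" lines above. retryUntil and the
	// jitter formula are illustrative assumptions, not minikube's retry API.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryUntil re-runs fn with a randomized, growing delay until it
	// succeeds or the overall deadline elapses.
	func retryUntil(fn func() error, deadline time.Duration) error {
		start := time.Now()
		for attempt := 1; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
			}
			// Jitter scaled by the attempt number, like the growing gaps logged above.
			wait := time.Duration(rand.Int63n(int64(attempt) * int64(5*time.Second)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}
	
	func main() {
		calls := 0
		err := retryUntil(func() error {
			calls++
			if calls < 3 {
				return errors.New("exit status 2")
			}
			return nil
		}, time.Minute)
		fmt.Println("result:", err)
	}
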
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (363.529305ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:30:49.601021    4688 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:49.601273    4688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:49.601279    4688 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:49.601283    4688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:49.601455    4688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:49.601653    4688 out.go:352] Setting JSON to false
	I1003 20:30:49.601675    4688 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:49.601713    4688 notify.go:220] Checking for updates...
	I1003 20:30:49.602043    4688 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:49.602063    4688 status.go:174] checking status of ha-214000 ...
	I1003 20:30:49.602491    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.602531    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.613941    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51485
	I1003 20:30:49.614381    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.614828    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.614841    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.615079    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.615174    4688 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:49.615272    4688 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:49.615343    4688 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:49.616491    4688 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:49.616511    4688 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:49.616758    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.616782    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.627667    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51487
	I1003 20:30:49.627990    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.628346    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.628357    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.628559    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.628678    4688 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:49.628770    4688 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:49.629027    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.629058    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.639987    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51489
	I1003 20:30:49.640313    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.640670    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.640686    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.640931    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.641048    4688 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:49.641205    4688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:49.641226    4688 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:49.641323    4688 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:49.641417    4688 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:49.641504    4688 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:49.641598    4688 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:49.675829    4688 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:49.681546    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:49.694974    4688 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:49.695002    4688 api_server.go:166] Checking apiserver status ...
	I1003 20:30:49.695047    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:49.708349    4688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:49.718160    4688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:49.718233    4688 ssh_runner.go:195] Run: ls
	I1003 20:30:49.721906    4688 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:49.725235    4688 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:49.725250    4688 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:49.725257    4688 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:49.725268    4688 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:49.725541    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.725563    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.736743    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51493
	I1003 20:30:49.737054    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.737383    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.737407    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.737627    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.737739    4688 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:49.737813    4688 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:49.737888    4688 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:49.739020    4688 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:49.739030    4688 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:49.739300    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.739323    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.750143    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51495
	I1003 20:30:49.750504    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.750806    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.750814    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.751039    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.751150    4688 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:49.751233    4688 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:49.751516    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.751543    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.762431    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51497
	I1003 20:30:49.762771    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.763111    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.763126    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.763339    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.763443    4688 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:49.763577    4688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:49.763588    4688 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:49.763679    4688 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:49.763772    4688 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:49.763868    4688 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:49.763956    4688 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:49.794445    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:49.804951    4688 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:49.804966    4688 api_server.go:166] Checking apiserver status ...
	I1003 20:30:49.805017    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:49.814593    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:49.814603    4688 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:30:49.814609    4688 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:49.814621    4688 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:49.814888    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.814911    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.826118    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51500
	I1003 20:30:49.826444    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.826777    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.826788    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.827009    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.827117    4688 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:49.827201    4688 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:49.827275    4688 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:49.828407    4688 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:49.828414    4688 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:49.828667    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.828696    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.839519    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51502
	I1003 20:30:49.839869    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.840216    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.840232    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.840469    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.840594    4688 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:49.840681    4688 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:49.840939    4688 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:49.840963    4688 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:49.851810    4688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51504
	I1003 20:30:49.852143    4688 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:49.852505    4688 main.go:141] libmachine: Using API Version  1
	I1003 20:30:49.852524    4688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:49.852782    4688 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:49.852901    4688 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:49.853050    4688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:49.853062    4688 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:49.853168    4688 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:49.853272    4688 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:49.853368    4688 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:49.853454    4688 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:49.889417    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:49.900850    4688 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:49.903459    2003 retry.go:31] will retry after 6.277839012s: exit status 2
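
The control-plane check in the trace above runs in three steps: pgrep for a kube-apiserver pid, a non-fatal freezer-cgroup lookup, and finally an HTTPS GET of /healthz expecting a 200 "ok" body (api_server.go:253/279). A self-contained sketch of that final probe follows; the skip-verify TLS transport and timeout are assumptions for a test cluster with a self-signed certificate, not minikube's exact client setup.

	// healthz_sketch.go - reproduces the liveness probe seen above:
	// GET https://<control-plane>:8443/healthz and require a 200 "ok" body.
	// InsecureSkipVerify is an assumption for a self-signed test cluster.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		return nil
	}
	
	func main() {
		if err := checkHealthz("https://192.169.0.254:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy:", err)
		}
	}
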
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (362.72718ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:30:56.243631    4699 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:56.243870    4699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:56.243876    4699 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:56.243879    4699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:56.244053    4699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:30:56.244237    4699 out.go:352] Setting JSON to false
	I1003 20:30:56.244260    4699 mustload.go:65] Loading cluster: ha-214000
	I1003 20:30:56.244300    4699 notify.go:220] Checking for updates...
	I1003 20:30:56.244602    4699 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:56.244625    4699 status.go:174] checking status of ha-214000 ...
	I1003 20:30:56.245036    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.245092    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.256547    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51508
	I1003 20:30:56.257008    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.257416    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.257427    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.257638    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.257749    4699 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:30:56.257830    4699 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:56.257897    4699 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:30:56.258999    4699 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:30:56.259018    4699 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:56.259277    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.259306    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.270076    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51510
	I1003 20:30:56.270405    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.270763    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.270772    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.271062    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.271192    4699 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:30:56.271299    4699 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:30:56.271565    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.271594    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.282469    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51512
	I1003 20:30:56.282783    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.283149    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.283172    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.283450    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.283591    4699 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:30:56.283769    4699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:56.283791    4699 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:30:56.283879    4699 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:30:56.283966    4699 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:30:56.284049    4699 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:30:56.284139    4699 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:30:56.317036    4699 ssh_runner.go:195] Run: systemctl --version
	I1003 20:30:56.321279    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:56.333497    4699 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:56.333523    4699 api_server.go:166] Checking apiserver status ...
	I1003 20:30:56.333579    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:30:56.345204    4699 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:30:56.353074    4699 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:56.353138    4699 ssh_runner.go:195] Run: ls
	I1003 20:30:56.356270    4699 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:30:56.360002    4699 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:30:56.360014    4699 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:30:56.360020    4699 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:56.360030    4699 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:30:56.360291    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.360313    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.371202    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51516
	I1003 20:30:56.371525    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.371899    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.371913    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.372124    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.372244    4699 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:30:56.372324    4699 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:56.372394    4699 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:30:56.373499    4699 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:30:56.373508    4699 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:56.373759    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.373792    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.384523    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51518
	I1003 20:30:56.384840    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.385196    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.385211    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.385436    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.385559    4699 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:30:56.385655    4699 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:30:56.385925    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.385953    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.396674    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51520
	I1003 20:30:56.397001    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.397326    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.397339    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.397554    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.397676    4699 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:30:56.397819    4699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:56.397830    4699 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:30:56.397911    4699 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:30:56.397999    4699 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:30:56.398094    4699 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:30:56.398183    4699 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:30:56.432932    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:56.446258    4699 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:30:56.446274    4699 api_server.go:166] Checking apiserver status ...
	I1003 20:30:56.446331    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:30:56.457639    4699 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:30:56.457651    4699 status.go:463] ha-214000-m02 apiserver status = Running (err=<nil>)
	I1003 20:30:56.457656    4699 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:56.457671    4699 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:30:56.457970    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.457990    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.468906    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51523
	I1003 20:30:56.469235    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.469553    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.469567    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.469815    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.469938    4699 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:30:56.470036    4699 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:30:56.470101    4699 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:30:56.471192    4699 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:30:56.471201    4699 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:56.471461    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.471490    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.482353    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51525
	I1003 20:30:56.482661    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.483010    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.483023    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.483226    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.483329    4699 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:30:56.483418    4699 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:30:56.483676    4699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:30:56.483699    4699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:30:56.494413    4699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51527
	I1003 20:30:56.494733    4699 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:30:56.495105    4699 main.go:141] libmachine: Using API Version  1
	I1003 20:30:56.495123    4699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:30:56.495359    4699 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:30:56.495472    4699 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:30:56.495627    4699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:30:56.495638    4699 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:30:56.495721    4699 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:30:56.495816    4699 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:30:56.495898    4699 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:30:56.495997    4699 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:30:56.531803    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:30:56.543193    4699 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:30:56.545748    2003 retry.go:31] will retry after 5.964824626s: exit status 2
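
Each node check above also probes disk pressure with df -h /var | awk 'NR==2{print $5}': row 2, column 5 of df output is the Use% figure. A local sketch of the same probe is below; running it via exec on the local machine is an assumption, since in the log it is executed on the node through the ssh_runner session opened just before.

	// diskcheck_sketch.go - the /var usage probe seen before each kubelet
	// check. Running it locally via exec is an assumption; the harness runs
	// it on the node over SSH.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func varUsePercent() (string, error) {
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		pct, err := varUsePercent()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		fmt.Println("/var usage:", pct)
	}
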
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (354.6629ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:31:02.572979    4711 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:31:02.573313    4711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:31:02.573318    4711 out.go:358] Setting ErrFile to fd 2...
	I1003 20:31:02.573322    4711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:31:02.573511    4711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:31:02.573712    4711 out.go:352] Setting JSON to false
	I1003 20:31:02.573734    4711 mustload.go:65] Loading cluster: ha-214000
	I1003 20:31:02.573774    4711 notify.go:220] Checking for updates...
	I1003 20:31:02.574114    4711 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:31:02.574132    4711 status.go:174] checking status of ha-214000 ...
	I1003 20:31:02.574568    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.574613    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.586286    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51532
	I1003 20:31:02.586682    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.587092    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.587123    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.587348    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.587473    4711 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:31:02.587561    4711 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:02.587637    4711 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:31:02.588707    4711 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:31:02.588724    4711 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:31:02.588967    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.588986    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.599857    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51534
	I1003 20:31:02.600202    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.600562    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.600581    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.600824    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.600943    4711 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:31:02.601052    4711 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:31:02.601308    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.601330    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.612050    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51536
	I1003 20:31:02.612384    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.612757    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.612773    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.612985    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.613118    4711 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:31:02.613286    4711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:31:02.613309    4711 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:31:02.613416    4711 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:31:02.613508    4711 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:31:02.613609    4711 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:31:02.613733    4711 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:31:02.646865    4711 ssh_runner.go:195] Run: systemctl --version
	I1003 20:31:02.651214    4711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:31:02.663024    4711 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:31:02.663050    4711 api_server.go:166] Checking apiserver status ...
	I1003 20:31:02.663101    4711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:31:02.675153    4711 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:31:02.682442    4711 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:31:02.682497    4711 ssh_runner.go:195] Run: ls
	I1003 20:31:02.685692    4711 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:31:02.689830    4711 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:31:02.689841    4711 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:31:02.689847    4711 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:31:02.689858    4711 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:31:02.690155    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.690175    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.701417    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51540
	I1003 20:31:02.701747    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.702077    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.702088    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.702317    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.702447    4711 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:31:02.702546    4711 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:02.702640    4711 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:31:02.703747    4711 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:31:02.703756    4711 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:31:02.704030    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.704056    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.714911    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51542
	I1003 20:31:02.715247    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.715599    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.715612    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.715833    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.715949    4711 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:31:02.716037    4711 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:31:02.716308    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.716335    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.727209    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51544
	I1003 20:31:02.727516    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.727868    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.727890    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.728100    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.728210    4711 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:31:02.728363    4711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:31:02.728375    4711 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:31:02.728457    4711 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:31:02.728534    4711 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:31:02.728621    4711 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:31:02.728694    4711 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:31:02.759631    4711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:31:02.770020    4711 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:31:02.770035    4711 api_server.go:166] Checking apiserver status ...
	I1003 20:31:02.770095    4711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:31:02.779661    4711 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:31:02.779672    4711 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:31:02.779677    4711 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:31:02.779686    4711 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:31:02.779987    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.780013    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.791033    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51547
	I1003 20:31:02.791351    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.791694    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.791710    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.791925    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.792046    4711 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:31:02.792149    4711 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:02.792233    4711 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:31:02.793328    4711 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:31:02.793337    4711 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:31:02.793603    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.793629    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.804558    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51549
	I1003 20:31:02.804897    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.805255    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.805268    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.805522    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.805642    4711 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:31:02.805725    4711 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:31:02.806008    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:02.806033    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:02.816967    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51551
	I1003 20:31:02.817293    4711 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:02.817618    4711 main.go:141] libmachine: Using API Version  1
	I1003 20:31:02.817632    4711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:02.817857    4711 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:02.817961    4711 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:31:02.818137    4711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:31:02.818148    4711 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:31:02.818239    4711 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:31:02.818320    4711 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:31:02.818417    4711 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:31:02.818496    4711 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:31:02.852766    4711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:31:02.864631    4711 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1003 20:31:02.868072    2003 retry.go:31] will retry after 24.263177765s: exit status 2
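
The "&{Name:ha-214000-m02 Host:Running ...}" lines recurring in these traces are a Go struct pointer printed with fmt's %+v verb. The sketch below is a simplified stand-in with only the fields visible in the log, not minikube's exact status.go type.

	// status_sketch.go - shows how the "&{Name:... Host:Running ...}" lines
	// are rendered: %+v on a struct pointer prints field names and values.
	// This struct is a simplified stand-in, not minikube's exact type.
	package main
	
	import "fmt"
	
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}
	
	func main() {
		s := &Status{
			Name:       "ha-214000-m02",
			Host:       "Running",
			Kubelet:    "Stopped",
			APIServer:  "Stopped",
			Kubeconfig: "Configured",
			Worker:     false,
		}
		fmt.Printf("%+v\n", s) // prints &{Name:ha-214000-m02 Host:Running ...}
	}
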
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 2 (353.179138ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 20:31:27.193349    4733 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:31:27.193595    4733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:31:27.193600    4733 out.go:358] Setting ErrFile to fd 2...
	I1003 20:31:27.193603    4733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:31:27.193800    4733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:31:27.193983    4733 out.go:352] Setting JSON to false
	I1003 20:31:27.194007    4733 mustload.go:65] Loading cluster: ha-214000
	I1003 20:31:27.194049    4733 notify.go:220] Checking for updates...
	I1003 20:31:27.194370    4733 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:31:27.194392    4733 status.go:174] checking status of ha-214000 ...
	I1003 20:31:27.194811    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.194864    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.206777    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51555
	I1003 20:31:27.207117    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.207533    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.207544    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.207757    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.207871    4733 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:31:27.207976    4733 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:27.208035    4733 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:31:27.209096    4733 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:31:27.209116    4733 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:31:27.209377    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.209400    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.220570    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51557
	I1003 20:31:27.220928    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.221265    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.221278    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.221536    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.221658    4733 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:31:27.221755    4733 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:31:27.222043    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.222072    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.233211    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51559
	I1003 20:31:27.233531    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.233893    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.233913    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.234115    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.234218    4733 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:31:27.234392    4733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:31:27.234414    4733 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:31:27.234495    4733 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:31:27.234579    4733 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:31:27.234649    4733 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:31:27.234730    4733 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:31:27.266481    4733 ssh_runner.go:195] Run: systemctl --version
	I1003 20:31:27.271172    4733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:31:27.282004    4733 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:31:27.282028    4733 api_server.go:166] Checking apiserver status ...
	I1003 20:31:27.282086    4733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:31:27.293132    4733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W1003 20:31:27.300398    4733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:31:27.300444    4733 ssh_runner.go:195] Run: ls
	I1003 20:31:27.303881    4733 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:31:27.307201    4733 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:31:27.307211    4733 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:31:27.307218    4733 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:31:27.307228    4733 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:31:27.307477    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.307497    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.318531    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51563
	I1003 20:31:27.318858    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.319175    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.319185    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.319391    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.319491    4733 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:31:27.319579    4733 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:27.319654    4733 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:31:27.320701    4733 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:31:27.320709    4733 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:31:27.320956    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.320981    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.331887    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51565
	I1003 20:31:27.332232    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.332566    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.332580    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.332789    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.332897    4733 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:31:27.333002    4733 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:31:27.333268    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.333290    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.344069    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51567
	I1003 20:31:27.344464    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.344771    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.344781    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.345022    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.345146    4733 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:31:27.345291    4733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:31:27.345306    4733 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:31:27.345393    4733 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:31:27.345475    4733 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:31:27.345566    4733 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:31:27.345674    4733 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:31:27.376946    4733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:31:27.388635    4733 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:31:27.388649    4733 api_server.go:166] Checking apiserver status ...
	I1003 20:31:27.388697    4733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:31:27.399243    4733 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:31:27.399252    4733 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:31:27.399257    4733 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:31:27.399265    4733 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:31:27.399540    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.399562    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.410632    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51570
	I1003 20:31:27.410965    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.411275    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.411291    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.411503    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.411608    4733 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:31:27.411687    4733 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:27.411750    4733 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:31:27.412794    4733 status.go:371] ha-214000-m03 host status = "Running" (err=<nil>)
	I1003 20:31:27.412803    4733 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:31:27.413053    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.413078    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.423900    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51572
	I1003 20:31:27.424250    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.424610    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.424620    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.424849    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.424965    4733 main.go:141] libmachine: (ha-214000-m03) Calling .GetIP
	I1003 20:31:27.425069    4733 host.go:66] Checking if "ha-214000-m03" exists ...
	I1003 20:31:27.425334    4733 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:27.425368    4733 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:27.436063    4733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51574
	I1003 20:31:27.436397    4733 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:27.436777    4733 main.go:141] libmachine: Using API Version  1
	I1003 20:31:27.436794    4733 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:27.437032    4733 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:27.437175    4733 main.go:141] libmachine: (ha-214000-m03) Calling .DriverName
	I1003 20:31:27.437327    4733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:31:27.437338    4733 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHHostname
	I1003 20:31:27.437424    4733 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHPort
	I1003 20:31:27.437521    4733 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHKeyPath
	I1003 20:31:27.437617    4733 main.go:141] libmachine: (ha-214000-m03) Calling .GetSSHUsername
	I1003 20:31:27.437692    4733 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m03/id_rsa Username:docker}
	I1003 20:31:27.471009    4733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:31:27.482862    4733 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
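The apiserver checks in the stderr above follow a fixed recipe: read the server URL out of the kubeconfig, pgrep for a kube-apiserver process over SSH, and finally probe /healthz over HTTPS. A minimal Go sketch of an equivalent healthz probe (the VIP and port come from the log; skipping TLS verification is a shortcut to keep the sketch self-contained, not necessarily what minikube's status.go does):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The control-plane VIP and port are taken from the log above.
		url := "https://192.169.0.254:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping cert verification keeps the sketch short;
			// a real client would load the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // 200 with body "ok" is healthy
	}

A 200 with body `ok` is exactly what the log reports for ha-214000, while the pgrep probe against m02 exits non-zero because no kube-apiserver process is running there.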
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr" : exit status 2
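`minikube status` encodes component health in its exit code, so the non-zero exit here reflects m02's stopped kubelet and apiserver rather than a crash of the status command itself; the test treats any non-zero exit as a failure. A hedged sketch of how a caller can recover that exit code in Go (the command and flags mirror the failing invocation above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-214000",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero code from `minikube status` reports degraded components.
			fmt.Printf("status exited %d:\n%s", ee.ExitCode(), out)
		}
	}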
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.243814711s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node start m02 -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
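This header describes the klog prefix carried by every line below: a severity letter (I/W/E/F), the mmdd date, wall-clock time with microseconds, the thread id, and the file:line of the call site. A small sketch that parses the prefix (the regular expression is an assumption derived from the format string above):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches e.g. "I1003 20:11:29.362809    3786 out.go:345] Setting OutFile ..."
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}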
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
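The block above is the in-memory cluster config that is about to be persisted to the profile's config.json. A hedged sketch that reads back a handful of those fields (the struct is a hand-written subset for illustration, not minikube's actual config type, and the path is a placeholder):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Subset of the fields visible in the dump above; names match the JSON keys.
	type clusterConfig struct {
		Name             string
		Memory           int
		CPUs             int
		DiskSize         int
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ClusterName       string
		}
	}

	func main() {
		// e.g. .minikube/profiles/ha-214000/config.json
		data, err := os.ReadFile("config.json")
		if err != nil {
			panic(err)
		}
		var cfg clusterConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s on %s, %d MB RAM, %d CPUs\n",
			cfg.Name, cfg.KubernetesConfig.KubernetesVersion, cfg.Driver, cfg.Memory, cfg.CPUs)
	}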
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
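The Arguments/CmdLine pair above is the complete hyperkit invocation: two vCPUs, 2200 MB of RAM, PCI slots for the host bridge, LPC, a virtio-net NIC, the raw disk, the boot2docker ISO and a virtio RNG, a serial console on an autopty, plus a direct kexec boot of the kernel/initrd with the kernel command line appended. A hedged Go sketch that assembles and launches a comparable command (paths are placeholders; this mirrors the flags in the log rather than reproducing the driver's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		state := "/path/to/machines/ha-214000" // placeholder for the driver's StateDir
		args := []string{
			"-A", "-u", // ACPI tables; RTC kept in UTC
			"-F", state + "/hyperkit.pid", // pid file the driver polls later
			"-c", "2", "-m", "2200M", // vCPUs and memory, as in the log
			"-s", "0:0,hostbridge", "-s", "31,lpc", // PCI host bridge and LPC
			"-s", "1:0,virtio-net", // NIC on the shared vmnet network
			"-s", "2:0,virtio-blk," + state + "/ha-214000.rawdisk",
			"-s", "3,ahci-cd," + state + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
			"-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil { // Start, not Run: hyperkit keeps running
			panic(err)
		}
		fmt.Println("hyperkit pid:", cmd.Process.Pid)
	}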
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
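The retry loop above is how the driver learns the VM's address: hyperkit only knows the generated MAC, so minikube polls macOS's /var/db/dhcpd_leases every two seconds until a lease with that HWAddress appears. A hedged sketch of one scan pass (the field names follow the standard dhcpd_leases layout, where each lease is a `{ ... }` block with one key=value per line):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		target := "a:aa:e8:3c:fe:20" // MAC from the log above
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// Value looks like "1,a:aa:e8:3c:fe:20"; drop the type prefix.
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 && hw[i+1:] == target {
					fmt.Println("found match:", ip)
					return
				}
			}
		}
		fmt.Println("no lease yet; the caller would sleep and retry")
	}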
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
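Running `exit 0` over SSH is the cheapest liveness probe: if the command executes at all, sshd is up and the key is accepted. A minimal sketch using golang.org/x/crypto/ssh (host, port, user, and key path follow the log; the retry/backoff loop around it is omitted):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-214000/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway VM probe
		}
		client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
		if err != nil {
			fmt.Println("ssh not ready:", err)
			return
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		fmt.Println("ssh available:", sess.Run("exit 0") == nil)
	}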
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
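configureAuth signs a fresh server certificate against the profile's CA, listing the SANs shown above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compressed crypto/x509 sketch of that step (self-signed here to stay short; minikube instead passes the CA cert and key as parent/priv when signing):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~the 26280h in the config
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the log line above.
			DNSNames:    []string{"ha-214000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		}
		// Self-signed for brevity; use the CA cert/key as parent and signer to match minikube.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}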
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
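The `df --output=fstype / | tail -n 1` probe tells the provisioner what the guest root filesystem is (tmpfs here, since the Buildroot ISO runs from RAM). Sketched as a local exec; the real runner sends the same command over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            panic(err)
        }
        fstype := strings.TrimSpace(string(out))
        fmt.Println("root file system type:", fstype) // "tmpfs" in the log above
    }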
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
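The one-liner above is deliberately idempotent: docker.service.new only replaces the live unit (followed by daemon-reload/enable/restart) when `diff -u` reports a difference, which is why the first run here shows the "can't stat" diff error and the symlink creation. The same compare-then-install pattern, sketched locally with a hypothetical helper name:

    package main

    import (
        "bytes"
        "os"
    )

    // installIfChanged writes unit to path only when the content differs,
    // returning true when the caller should daemon-reload and restart.
    func installIfChanged(path string, unit []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, unit) {
            return false, nil // unchanged: skip the restart entirely
        }
        if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
            return false, err
        }
        // Atomic swap, mirroring `mv docker.service.new docker.service`.
        return true, os.Rename(path+".new", path)
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            panic(err)
        }
        _ = changed // when true: systemctl daemon-reload && systemctl restart docker
    }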
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
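`date +%s.%N` reports the guest clock as seconds.nanoseconds; the provisioner parses it and compares against the host to decide whether a resync is needed (the -177ms delta here was within tolerance). A sketch of that parse-and-compare, with a hypothetical 2-second tolerance standing in for minikube's actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guest := "1728011503.815868228" // output of `date +%s.%N` on the guest
        parts := strings.SplitN(guest, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64) // %N is always 9 digits
        delta := time.Unix(sec, nsec).Sub(time.Now())
        fmt.Printf("guest clock delta: %v\n", delta)
        if delta < -2*time.Second || delta > 2*time.Second { // hypothetical tolerance
            fmt.Println("out of tolerance: would resync the guest clock here")
        }
    }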
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
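Each `sed -i -r` run above rewrites a single key in /etc/containerd/config.toml; forcing SystemdCgroup = false is what selects the cgroupfs driver announced in the log. The same edit done natively instead of via sed:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        cfg, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        cfg = re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, cfg, 0o644); err != nil {
            panic(err)
        }
    }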
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
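The preload path avoids pulling images one by one: a single lz4 tarball of /var/lib/docker is copied in, unpacked with xattrs preserved (so file capabilities inside the preloaded images survive), then repositories.json is restored before Docker restarts. The extraction step, sketched as a local exec rather than the SSH runner:

    package main

    import "os/exec"

    func main() {
        // --xattrs/--xattrs-include keep security.capability intact, and
        // -I lz4 streams decompression through lz4, as in the log above.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4",
            "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
    }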
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
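The generated kubeadm.yaml above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One way to sanity-check such a file is to walk it with a multi-document decoder; a sketch using gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break // all documents consumed
            } else if err != nil {
                panic(err) // malformed document
            }
            fmt.Println(doc.Kind, doc.APIVersion)
        }
    }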
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
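kube-vip runs as a static pod on every control plane, takes a leader lease (plndr-cp-lock), and advertises the VIP 192.169.0.254 via ARP, with lb_enable load-balancing API traffic onto port 8443. A hypothetical readiness probe against that VIP; InsecureSkipVerify appears only because this sketch skips loading minikube's CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: a real client should trust minikube's CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.169.0.254:8443/healthz")
        if err != nil {
            fmt.Println("VIP not answering yet:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("VIP status:", resp.Status)
    }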
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
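The `ln -fs ... /etc/ssl/certs/<hash>.0` steps implement OpenSSL's hashed-directory CA lookup: verification finds a CA by a symlink named after its subject hash (51391683, 3ec20f2e, and b5213941 in this run). The same dance in Go, shelling out to openssl for the hash:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA in this log
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic the force flag of ln -fs
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
    }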
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
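The bootstrap invokes kubeadm from minikube's own binaries directory by prefixing PATH, and suppresses exactly the preflight checks that would trip over the directories and manifests minikube pre-creates. A local-exec sketch with the argument list taken from the log line above:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        ignore := "--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests," +
            "DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd," +
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml," +
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml," +
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml," +
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
        // env PATH=... puts minikube's bundled v1.31.1 binaries first.
        cmd := exec.Command("sudo", "env",
            "PATH=/var/lib/minikube/binaries/v1.31.1:"+os.Getenv("PATH"),
            "kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml", ignore)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }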
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
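
The --discovery-token-ca-cert-hash printed by kubeadm above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info; joining nodes use it to pin the CA before trusting the API server. A minimal sketch of recomputing such a hash in Go, assuming the conventional kubeadm CA location /etc/kubernetes/pki/ca.crt (an assumption, not a path taken from this log):

// Recompute a kubeadm discovery-token-ca-cert-hash like the two printed above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
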
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
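
The ops.go line above records the result of the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe: an adjustment of -16 makes the API server an unattractive target for the kernel OOM killer. A rough Go equivalent of that probe, assuming pgrep is on PATH and taking only the first matching PID:

// Read the kube-apiserver's OOM adjustment, mirroring the shell probe above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep can print several PIDs; take the first for illustration.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
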
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
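
The replace command above pipes the coredns ConfigMap through sed to splice a hosts stanza in front of the `forward . /etc/resolv.conf` plugin line, which is how {"host.minikube.internal": 192.169.0.1} becomes resolvable inside the cluster. A sketch of the same line-level splice in Go, using a simplified stand-in Corefile rather than the real ConfigMap contents:

// Insert a hosts{} stanza before the forward plugin line of a Corefile,
// as the sed pipeline above does. The Corefile below is a minimal stand-in.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	hosts := `        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hosts) // splice the stanza just above forward
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
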
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
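
The Attempt 0-5 loop above is the driver polling macOS's DHCP lease database until the freshly generated MAC 8e:24:b7:e1:5:14 shows up with an address. A sketch of that lookup, assuming the usual /var/db/dhcpd_leases layout of name=/ip_address=/hw_address= lines (with ip_address preceding hw_address inside each entry, as it does on macOS; the parsing here is simplified relative to the real driver):

// Scan /var/db/dhcpd_leases for a given MAC, as the retry loop above does.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC returns the ip_address of the lease whose hw_address matches mac.
// It assumes ip_address appears before hw_address within each lease entry.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,8e:24:b7:e1:5:14 -- strip the leading type byte.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
			if hw == mac {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "8e:24:b7:e1:5:14")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.169.0.6 in the run above
}
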
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
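
"Waiting for SSH to be available" is implemented as a retry loop that runs `exit 0` over SSH until it returns cleanly, which is the empty-output success recorded above. A sketch of that probe using golang.org/x/crypto/ssh; the address, user, and key path are filled in from this run for illustration, and host-key verification is disabled purely to keep the example short:

// Poll a new VM for SSH availability by running "exit 0", as WaitForSSH does.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0") // the same probe as in the log
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
}

func main() {
	// Values from the run above; the key path mirrors the sshutil.go lines.
	err := waitForSSH("192.169.0.6:22", "docker",
		"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa",
		3*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh is up")
}
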
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
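
Provisioner detection is driven by the `cat /etc/os-release` output captured above: the ID field is matched against known provisioners, and ID=buildroot selects the buildroot provisioner. A sketch of that match, reusing the exact output from this run:

// Detect the provisioner from /etc/os-release output by matching the ID field.
package main

import (
	"fmt"
	"strings"
)

func osID(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	// The exact output captured in the log above:
	out := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	if osID(out) == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
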
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
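	A quick way to confirm which SANs actually landed in that generated server cert (path taken from the line above; the openssl invocation is a generic sketch, not part of this test run):
	
	  openssl x509 -noout -text \
	    -in /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	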
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
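	The comment block in the unit above describes standard systemd behavior: for non-oneshot services, an empty "ExecStart=" clears any inherited command so that the following "ExecStart=" line becomes the only one. A minimal stand-alone sketch of the same pattern, using a hypothetical override drop-in rather than minikube's full rendered unit:
	
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	    | sudo tee /etc/systemd/system/docker.service.d/override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker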
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
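	The "diff || { mv ...; }" command above is a write-if-changed idiom: the unit is replaced and docker restarted only when the freshly rendered file differs from the installed one, and also on first boot, where diff fails because the old file does not yet exist (exactly the "can't stat" output seen here). In isolation the idiom looks like:
	
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload &&
	    sudo systemctl -f enable docker &&
	    sudo systemctl -f restart docker
	  }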
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
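	Taken together, those sed edits steer containerd's CRI plugin toward the cgroupfs driver, the runc v2 shim, and the default CNI conf dir. The config.toml fragment they converge on would look roughly like this (key paths per containerd 1.7's CRI plugin; reconstructed for illustration, not captured from the VM):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    restrict_oom_score_adj = false
	    enable_unprivileged_ports = true
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	      runtime_type = "io.containerd.runc.v2"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"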
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
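	The two fix-ups above are the stock Kubernetes networking prerequisites: load br_netfilter so the bridge sysctls exist at all (the earlier probe failed with "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables"), then enable IPv4 forwarding. Run by hand, the equivalent would be (a sketch, not taken from this log):
	
	  sudo modprobe br_netfilter
	  sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	  sudo sysctl -w net.ipv4.ip_forward=1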
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
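	The decisive line in the journal above is dockerd[913] timing out while dialing /run/containerd/containerd.sock on its second start, right after minikube restarted the system containerd. Hypothetical follow-up commands on the guest (not part of this run) to see whether that containerd actually came back:
	
	  sudo systemctl status containerd --no-pager
	  sudo journalctl -u containerd --no-pager | tail -n 50
	  ls -l /run/containerd/containerd.sock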
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         19 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         19 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         19 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              19 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         19 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     19 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         19 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         19 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         19 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         19 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x3 over 19m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x3 over 19m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                19m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x2 over 6m9s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x2 over 6m9s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x2 over 6m9s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                 node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                5m39s                kubelet          Node ha-214000-m03 status is now: NodeReady
	
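	The Allocated resources percentages on both nodes are computed against allocatable capacity: on ha-214000 the per-pod CPU requests sum to 100m + 100m + 100m + 100m + 250m + 200m + 100m = 950m, and 950m of the 2000m allocatable is the 47% shown; likewise 290Mi of 2164336Ki (about 2113Mi) of memory is the 13% shown, and the 390Mi of limits the 18%.
	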
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:27:00.215476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-04T03:27:00.217042Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"1.236946ms","hash":1433174615,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-04T03:27:00.217099Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1433174615,"revision":1514,"compact-revision":973}
	{"level":"info","ts":"2024-10-04T03:31:28.060845Z","caller":"traceutil/trace.go:171","msg":"trace[860112081] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"112.489562ms","start":"2024-10-04T03:31:27.948335Z","end":"2024-10-04T03:31:28.060825Z","steps":["trace[860112081] 'process raft request'  (duration: 91.094323ms)","trace[860112081] 'compare'  (duration: 21.269614ms)"],"step_count":2}
	
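	etcd emits the "apply request took too long" warning whenever a request exceeds its fixed 100ms expected-duration (quoted in the message itself); the ~128ms linearizable read here lands during initial node registration on a 2-CPU VM, so it points to transient contention rather than data damage. Endpoint health can be confirmed from inside the guest; a sketch, assuming the usual minikube certificate locations under /var/lib/minikube/certs/etcd:
	
		minikube ssh -p ha-214000 -- sudo ETCDCTL_API=3 etcdctl \
		  --endpoints=https://127.0.0.1:2379 \
		  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
		  --cert=/var/lib/minikube/certs/etcd/server.crt \
		  --key=/var/lib/minikube/certs/etcd/server.key \
		  endpoint health
	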
	
	==> kernel <==
	 03:31:29 up 19 min,  0 users,  load average: 0.14, 0.20, 0.18
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:30:23.498507       1 main.go:299] handling current node
	I1004 03:30:33.504932       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:33.505118       1 main.go:299] handling current node
	I1004 03:30:33.505159       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:33.505270       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:30:43.497209       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:43.497346       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:30:43.497580       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:43.497755       1 main.go:299] handling current node
	I1004 03:30:53.496402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:53.496595       1 main.go:299] handling current node
	I1004 03:30:53.496647       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:53.496795       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:03.496468       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:03.496619       1 main.go:299] handling current node
	I1004 03:31:03.496645       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:03.496656       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:13.497200       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:13.497236       1 main.go:299] handling current node
	I1004 03:31:13.497252       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:13.497259       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:23.497508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:23.497727       1 main.go:299] handling current node
	I1004 03:31:23.497777       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:23.497873       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	I1004 03:26:22.202705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:28:54.315206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:31:28.798824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
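	The two "Error cleaning up nftables rules ... Operation not supported" entries at the top of this block are expected on this guest: kube-proxy v1.31 probes for stale nftables state at startup, and the Buildroot kernel has no nft support and no IPv6 iptables (per "No iptables support for family" above), so it proceeds as a single-stack IPv4 iptables proxier. That the iptables mode is actually in effect can be spot-checked from the node, e.g.:
	
		# kube-proxy's iptables mode installs a KUBE-SERVICES chain in the nat table
		minikube ssh -p ha-214000 -- sudo iptables -t nat -L KUBE-SERVICES | head
	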
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
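	The forbidden list/watch errors above are the usual kube-scheduler startup race: its informers start before kubeadm finishes installing the RBAC bindings for system:kube-scheduler, and the closing "Caches are synced" line at 03:12:03 shows it recovered within about two seconds. Whether the permission is in place now can be checked directly via impersonation:
	
		kubectl --context ha-214000 auth can-i list pods --as=system:kube-scheduler
	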
	
	==> kubelet <==
	Oct 04 03:27:04 ha-214000 kubelet[2148]: E1004 03:27:04.977774    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:27:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:27:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:27:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:27:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:28:04 ha-214000 kubelet[2148]: E1004 03:28:04.975288    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:28:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:28:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:28:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:28:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:29:04 ha-214000 kubelet[2148]: E1004 03:29:04.973888    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:29:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:29:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:29:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:29:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:30:04 ha-214000 kubelet[2148]: E1004 03:30:04.973795    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:30:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:30:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:30:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:30:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:31:04 ha-214000 kubelet[2148]: E1004 03:31:04.972847    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:31:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:31:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:31:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:31:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
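The repeating block in the kubelet log above is a once-a-minute canary probe: kubelet tries to create a KUBE-KUBELET-CANARY chain in the ip6tables nat table, which this kernel cannot provide (matching kube-proxy's missing IPv6 iptables support), so the identical error recurs at 03:27 through 03:31 without touching the IPv4 data path. A manual reproduction, assuming the ip6table_nat module really is absent from the guest kernel:

	# expected to fail the same way: "can't initialize ip6tables table `nat'"
	minikube ssh -p ha-214000 -- sudo ip6tables -t nat -L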
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  7m55s (x3 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  25s (x3 over 5m40s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (314.24s)
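The failure is internally consistent: busybox-7dff88458 appears to place one replica per node via pod anti-affinity, so with only two Ready nodes in the "describe nodes" output (ha-214000 and ha-214000-m03; m02 is absent from the node list), the third replica z5g4l has nowhere to schedule, exactly as both FailedScheduling events report. The stuck replica can be listed the same way the harness does:

	kubectl --context ha-214000 get po -A --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}'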

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:305: expected profile "ha-214000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-214000" in json of 'profile list' to have "HAppy" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
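Both assertions parse the same profile JSON: ha_test.go:305 counts Config.Nodes (three entries here instead of the expected four, with the m03 entry showing Port 0 and an empty ContainerRuntime after the restart) and ha_test.go:309 checks the top-level Status field (OK instead of the "HAppy" value the test expects for a fully healthy HA cluster). The same two fields can be extracted by hand, assuming jq is available:

	out/minikube-darwin-amd64 profile list --output json | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'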
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.198185615s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:23 PDT | 03 Oct 24 20:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node start m02 -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
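	The tail of the audit log above records the DNS probes leading up to this failure: repeated "kubectl exec" calls into the busybox pods, several of which have no End Time, i.e. they never completed. A minimal sketch of the same probe run by hand against this profile (pod names are the ones recorded above and will differ on a fresh cluster):

	    # resolve an external name and the in-cluster service name from a busybox test pod
	    out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.io
	    out/minikube-darwin-amd64 kubectl -p ha-214000 -- exec busybox-7dff88458-9tvdj -- nslookup kubernetes.default.svc.cluster.local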
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:11:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:11:29.362809    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:29.363039    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363044    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:29.363048    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:29.363235    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:29.365000    3786 out.go:352] Setting JSON to false
	I1003 20:11:29.396737    3786 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2459,"bootTime":1728009030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:29.396923    3786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:29.458044    3786 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:29.482012    3786 notify.go:220] Checking for updates...
	I1003 20:11:29.511063    3786 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:29.544230    3786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:29.589182    3786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:29.617478    3786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:29.637723    3786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.658836    3786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:29.680396    3786 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:29.712883    3786 out.go:177] * Using the hyperkit driver based on user configuration
	I1003 20:11:29.754745    3786 start.go:297] selected driver: hyperkit
	I1003 20:11:29.754773    3786 start.go:901] validating driver "hyperkit" against <nil>
	I1003 20:11:29.754794    3786 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:29.761531    3786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.761672    3786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:11:29.772272    3786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:11:29.778672    3786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.778698    3786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:11:29.778749    3786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:11:29.778983    3786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:11:29.779016    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:29.779048    3786 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:11:29.779054    3786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:11:29.779122    3786 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:29.779200    3786 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:11:29.821469    3786 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:11:29.842726    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:29.842805    3786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:11:29.842830    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:11:29.843054    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:11:29.843072    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:11:29.843588    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:29.843649    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json: {Name:mk4be269ea2d061f937392ef4273adf7c84c2b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:29.844134    3786 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:11:29.844228    3786 start.go:364] duration metric: took 81.809µs to acquireMachinesLock for "ha-214000"
	I1003 20:11:29.844257    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:11:29.844311    3786 start.go:125] createHost starting for "" (driver="hyperkit")
	I1003 20:11:29.865671    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:11:29.865889    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:29.865939    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:29.877337    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50857
	I1003 20:11:29.877653    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:29.878138    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:11:29.878153    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:29.878396    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:29.878525    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:29.878624    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:29.878732    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:11:29.878761    3786 client.go:168] LocalClient.Create starting
	I1003 20:11:29.878798    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:11:29.878859    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.878873    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.878937    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:11:29.878992    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:11:29.879004    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:11:29.879021    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:11:29.879030    3786 main.go:141] libmachine: (ha-214000) Calling .PreCreateCheck
	I1003 20:11:29.879111    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.879284    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:29.887131    3786 main.go:141] libmachine: Creating machine...
	I1003 20:11:29.887155    3786 main.go:141] libmachine: (ha-214000) Calling .Create
	I1003 20:11:29.887395    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:29.887708    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:29.887373    3795 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:29.887815    3786 main.go:141] libmachine: (ha-214000) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:11:30.068139    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.068058    3795 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa...
	I1003 20:11:30.278862    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.278767    3795 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk...
	I1003 20:11:30.278877    3786 main.go:141] libmachine: (ha-214000) DBG | Writing magic tar header
	I1003 20:11:30.278885    3786 main.go:141] libmachine: (ha-214000) DBG | Writing SSH key tar header
	I1003 20:11:30.279809    3786 main.go:141] libmachine: (ha-214000) DBG | I1003 20:11:30.279698    3795 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000 ...
	I1003 20:11:30.645016    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.645036    3786 main.go:141] libmachine: (ha-214000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:11:30.645049    3786 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:11:30.758974    3786 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:11:30.759004    3786 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:11:30.759058    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759114    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:11:30.759199    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:11:30.759251    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:11:30.759265    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:11:30.762136    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 DEBUG: hyperkit: Pid is 3798
	I1003 20:11:30.762560    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:11:30.762572    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:30.762695    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:30.763679    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:30.763800    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:30.763813    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:30.763843    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:30.763856    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:30.772475    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:11:30.827292    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:11:30.828065    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:30.828078    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:30.828093    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:30.828100    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.209152    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:11:31.209167    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:11:31.323757    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:11:31.323779    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:11:31.323806    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:11:31.323818    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:11:31.324662    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:11:31.324674    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:11:32.765982    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 1
	I1003 20:11:32.766001    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:32.766128    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:32.767005    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:32.767043    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:32.767052    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:32.767062    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:32.767069    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:34.767585    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 2
	I1003 20:11:34.767598    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:34.767703    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:34.768723    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:34.768771    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:34.768778    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:34.768784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:34.768792    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.770456    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 3
	I1003 20:11:36.770473    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:36.770579    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:36.771418    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:36.771473    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:36.771484    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:36.771492    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:36.771497    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:36.913399    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:11:36.913426    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:11:36.913431    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:11:36.937041    3786 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:11:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:11:38.772950    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 4
	I1003 20:11:38.772983    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:38.773049    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:38.773921    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:38.773974    3786 main.go:141] libmachine: (ha-214000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1003 20:11:38.773983    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:11:38.773992    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:11:38.774000    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:11:40.775677    3786 main.go:141] libmachine: (ha-214000) DBG | Attempt 5
	I1003 20:11:40.775692    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.775831    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.776724    3786 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:11:40.776773    3786 main.go:141] libmachine: (ha-214000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:11:40.776784    3786 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:11:40.776794    3786 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:11:40.776801    3786 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
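	The "Attempt N" loop above is how the hyperkit driver discovers the new VM's address: it polls the host's DHCP lease database until the MAC it generated for the VM appears. The same lookup can be done by hand on the agent; a sketch using the MAC reported for this run:

	    # find the lease entry for the VM's generated MAC address
	    grep -B 2 -A 3 'a:aa:e8:3c:fe:20' /var/db/dhcpd_leases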
	I1003 20:11:40.776862    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:40.777484    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777602    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:40.777679    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:11:40.777685    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:11:40.777774    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:11:40.777826    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:11:40.778695    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:11:40.778706    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:11:40.778718    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:11:40.778723    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:40.778816    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:40.778906    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779027    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:40.779118    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:40.779259    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:40.779432    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:40.779439    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:11:41.839065    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:41.839077    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:11:41.839091    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.839222    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.839316    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839421    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.839515    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.839663    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.839797    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.839804    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:11:41.897050    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:11:41.897106    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:11:41.897113    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:11:41.897118    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897266    3786 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:11:41.897276    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:41.897390    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.897513    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.897625    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897725    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.897839    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.897975    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.898112    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.898120    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:11:41.967120    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:11:41.967137    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:41.967277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:41.967364    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967453    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:41.967544    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:41.967712    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:41.967856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:41.967867    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:11:42.030964    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:11:42.030986    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:11:42.031000    3786 buildroot.go:174] setting up certificates
	I1003 20:11:42.031007    3786 provision.go:84] configureAuth start
	I1003 20:11:42.031014    3786 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:11:42.031167    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:42.031275    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.031378    3786 provision.go:143] copyHostCerts
	I1003 20:11:42.031408    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031462    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:11:42.031470    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:11:42.031605    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:11:42.031821    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031851    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:11:42.031856    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:11:42.031954    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:11:42.032126    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032173    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:11:42.032187    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:11:42.032271    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:11:42.032418    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:11:42.124154    3786 provision.go:177] copyRemoteCerts
	I1003 20:11:42.124217    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:11:42.124231    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.124348    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.124432    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.124546    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.124649    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:42.159782    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:11:42.159849    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:11:42.179616    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:11:42.179691    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1003 20:11:42.199564    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:11:42.199631    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:11:42.219065    3786 provision.go:87] duration metric: took 188.042387ms to configureAuth
	I1003 20:11:42.219079    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:11:42.219212    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:42.219225    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:42.219371    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.219460    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.219551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219635    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.219719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.219851    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.219981    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.219988    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:11:42.278308    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:11:42.278321    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:11:42.278394    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:11:42.278407    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.278574    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.278685    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278782    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.278866    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.279035    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.279173    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.279220    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:11:42.346853    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:11:42.346875    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:42.347034    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:42.347132    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347226    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:42.347318    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:42.347460    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:42.347597    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:42.347609    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:11:43.902760    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
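	The one-liner at 20:11:42.347609 above is minikube's idempotent unit update: the new docker.service is only swapped in, and the daemon only restarted, when the rendered file differs from what is installed. The same commands, unrolled for readability:

	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	           sudo systemctl -f daemon-reload \
	           && sudo systemctl -f enable docker \
	           && sudo systemctl -f restart docker; }

	On this first boot diff fails with "can't stat", so the "||" branch installs, enables, and starts the unit, which is exactly what the output above records.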
	I1003 20:11:43.902776    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:11:43.902783    3786 main.go:141] libmachine: (ha-214000) Calling .GetURL
	I1003 20:11:43.902935    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:11:43.902943    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:11:43.902948    3786 client.go:171] duration metric: took 14.024062587s to LocalClient.Create
	I1003 20:11:43.902960    3786 start.go:167] duration metric: took 14.024109938s to libmachine.API.Create "ha-214000"
	I1003 20:11:43.902972    3786 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:11:43.902980    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:11:43.902993    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:43.903149    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:11:43.903160    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.903253    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.903346    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.903433    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.903528    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:43.945012    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:11:43.948628    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:11:43.948642    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:11:43.948752    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:11:43.948975    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:11:43.948981    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:11:43.949234    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:11:43.965860    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:43.992168    3786 start.go:296] duration metric: took 89.185018ms for postStartSetup
	I1003 20:11:43.992203    3786 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:11:43.992837    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:43.992990    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:11:43.993351    3786 start.go:128] duration metric: took 14.148907915s to createHost
	I1003 20:11:43.993366    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:43.993456    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:43.993554    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993642    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:43.993730    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:43.993842    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:11:43.993973    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:11:43.993979    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:11:44.052340    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011503.815868228
	
	I1003 20:11:44.052351    3786 fix.go:216] guest clock: 1728011503.815868228
	I1003 20:11:44.052356    3786 fix.go:229] Guest: 2024-10-03 20:11:43.815868228 -0700 PDT Remote: 2024-10-03 20:11:43.993359 -0700 PDT m=+14.679072523 (delta=-177.490772ms)
	I1003 20:11:44.052373    3786 fix.go:200] guest clock delta is within tolerance: -177.490772ms
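
The fix.go lines above read the guest's clock with `date +%s.%N`, convert the output to a timestamp, and accept the skew against the host clock when it is within tolerance. A sketch of that comparison using the two timestamps from this log; the one-second tolerance and the fixed PDT zone are assumptions, the real threshold is not shown here:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1728011503.815868228" (date +%s.%N) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1728011503.815868228")
	// The host-side timestamp from the log, pinned to PDT (UTC-7).
	remote := time.Date(2024, 10, 3, 20, 11, 43, 993359000, time.FixedZone("PDT", -7*60*60))
	delta := guest.Sub(remote) // about -177.49ms, as logged
	const tolerance = time.Second // assumed threshold, not taken from the log
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() <= tolerance)
}
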
	I1003 20:11:44.052376    3786 start.go:83] releasing machines lock for "ha-214000", held for 14.208019818s
	I1003 20:11:44.052394    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.052537    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:44.052648    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053036    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053145    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:11:44.053249    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:11:44.053277    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053338    3786 ssh_runner.go:195] Run: cat /version.json
	I1003 20:11:44.053349    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:11:44.053380    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053468    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053492    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:11:44.053583    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:11:44.053627    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053710    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.053719    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:11:44.053817    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:11:44.085365    3786 ssh_runner.go:195] Run: systemctl --version
	I1003 20:11:44.134772    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:11:44.139682    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:11:44.139732    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:11:44.153495    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
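
The find/mv step above renames any bridge or podman CNI config out of the way by appending `.mk_disabled`, so only the configuration minikube manages stays active. A rough Go equivalent of that find-and-rename (disableConfs is illustrative, not the actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConfs appends ".mk_disabled" to bridge/podman CNI configs in dir,
// mirroring the `find ... -exec mv {} {}.mk_disabled` step above.
func disableConfs(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", disabled)
}
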
	I1003 20:11:44.153508    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.153607    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.169251    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:11:44.178311    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:11:44.187159    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.187209    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:11:44.197076    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.206346    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:11:44.215582    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:11:44.224625    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:11:44.233664    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:11:44.243554    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:11:44.252493    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:11:44.261406    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:11:44.269454    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:11:44.269503    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:11:44.278432    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
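
The three steps above are a probe-then-fix fallback: the bridge-netfilter sysctl key does not exist until the br_netfilter module is loaded, so the failed read triggers a modprobe, and IPv4 forwarding is then switched on by writing to /proc directly. A root-only Go sketch of the same fallback (ensureNetfilter is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the fallback above: if the bridge-nf sysctl is
// missing, load br_netfilter, then make sure IPv4 forwarding is on.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key is absent until the module is loaded; load it now.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
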
	I1003 20:11:44.286531    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.381983    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:11:44.399822    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:11:44.399916    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:11:44.412834    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.424027    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:11:44.437617    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:11:44.449318    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.460487    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:11:44.535826    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:11:44.545855    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:11:44.560593    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:11:44.563476    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:11:44.571344    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:11:44.584658    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:11:44.694822    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:11:44.802349    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:11:44.802421    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:11:44.817157    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:44.915581    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:47.239275    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323655104s)
	I1003 20:11:47.239351    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:11:47.249649    3786 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:11:47.262714    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.273947    3786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:11:47.372796    3786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:11:47.487013    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.600780    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:11:47.615148    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:11:47.625336    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:47.721931    3786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:11:47.782968    3786 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:11:47.783071    3786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:11:47.787249    3786 start.go:563] Will wait 60s for crictl version
	I1003 20:11:47.787310    3786 ssh_runner.go:195] Run: which crictl
	I1003 20:11:47.790314    3786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:11:47.819111    3786 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:11:47.819194    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.834589    3786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:11:47.902545    3786 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:11:47.902593    3786 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:11:47.903055    3786 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:11:47.907567    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
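
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` idiom above rewrites /etc/hosts so the pinned name maps to exactly one address: drop any existing line ending in the tab-separated name, append the fresh mapping, and move the staged copy into place. The same logic as a Go sketch (pinHost is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so it maps name to ip exactly once, the same
// drop-then-append trick as the bash one-liner above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var b strings.Builder
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name (the grep -v)
		}
		b.WriteString(line)
		b.WriteByte('\n')
	}
	b.WriteString(ip + "\t" + name + "\n")
	// Stage to a temp file, then move into place (the /tmp/h.$$ copy above).
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(b.String()), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
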
	I1003 20:11:47.917430    3786 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:11:47.917492    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:11:47.917567    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:47.929258    3786 docker.go:685] Got preloaded images: 
	I1003 20:11:47.929271    3786 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1003 20:11:47.929334    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:47.936736    3786 ssh_runner.go:195] Run: which lz4
	I1003 20:11:47.939620    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 20:11:47.939750    3786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:11:47.942738    3786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:11:47.942754    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1003 20:11:48.943681    3786 docker.go:649] duration metric: took 1.003977987s to copy over tarball
	I1003 20:11:48.943756    3786 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:11:51.140513    3786 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.196722875s)
	I1003 20:11:51.140535    3786 ssh_runner.go:146] rm: /preloaded.tar.lz4
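
The preload sequence above runs: the expected images are missing from the Docker store, the tarball is absent on the guest (the failed stat), so the cached lz4 archive is copied over, unpacked under /var with extended attributes preserved, and removed. A condensed local sketch of that flow, with a plain cp standing in for the scp step (extractPreload is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the flow above: skip the copy if the tarball was
// already placed, otherwise copy it in, then unpack it under /var and
// delete the archive. Paths are the ones from the log.
func extractPreload(local, remote string) error {
	if _, err := os.Stat(remote); err != nil {
		// Not there yet (the failed `stat` above): copy the cached tarball.
		if out, err := exec.Command("cp", local, remote).CombinedOutput(); err != nil {
			return fmt.Errorf("copy preload: %v: %s", err, out)
		}
	}
	// Same extraction command the log runs over SSH.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remote)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return os.Remove(remote)
}

func main() {
	err := extractPreload(
		"/Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4",
		"/preloaded.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
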
	I1003 20:11:51.165020    3786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:11:51.172845    3786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1003 20:11:51.186674    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:51.288663    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:11:53.618189    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329487379s)
	I1003 20:11:53.618289    3786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:11:53.631579    3786 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:11:53.631602    3786 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:11:53.631611    3786 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:11:53.631705    3786 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:11:53.631789    3786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:11:53.665778    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:11:53.665791    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:11:53.665805    3786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:11:53.665821    3786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:11:53.665913    3786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:11:53.665936    3786 kube-vip.go:115] generating kube-vip config ...
	I1003 20:11:53.666004    3786 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:11:53.678865    3786 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:11:53.678932    3786 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
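
Whether the lb_enable/lb_port entries appear in the manifest above is decided by the earlier `modprobe --all ip_vs ...` probe: kube-vip's control-plane load-balancing is only auto-enabled when the IPVS modules load (kube-vip.go:167). A sketch of that conditional, with a plain map standing in for the manifest template data:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the probe above: load-balancing is only turned on
// when the IPVS kernel modules can be loaded on the guest.
func ipvsAvailable() bool {
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	return err == nil
}

func main() {
	env := map[string]string{"cp_enable": "true"}
	if ipvsAvailable() {
		// Matches the lb_enable/lb_port entries in the generated manifest.
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}
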
	I1003 20:11:53.678999    3786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:11:53.686416    3786 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:11:53.686471    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:11:53.694050    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:11:53.708150    3786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:11:53.721836    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:11:53.735734    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1003 20:11:53.749347    3786 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:11:53.752201    3786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:11:53.761633    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:11:53.859658    3786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:11:53.874246    3786 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:11:53.874258    3786 certs.go:194] generating shared ca certs ...
	I1003 20:11:53.874268    3786 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:53.874489    3786 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:11:53.874577    3786 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:11:53.874589    3786 certs.go:256] generating profile certs ...
	I1003 20:11:53.874634    3786 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:11:53.874645    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt with IP's: []
	I1003 20:11:54.048183    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt ...
	I1003 20:11:54.048206    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt: {Name:mk023cc3e7fa68e6dcb882808d5343d8262504b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048557    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key ...
	I1003 20:11:54.048564    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key: {Name:mkce4cfcb194ca17647eed9e9372d9da4a279217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.048823    3786 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4
	I1003 20:11:54.048838    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1003 20:11:54.176449    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 ...
	I1003 20:11:54.176464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4: {Name:mkd4c17fc71aeb52648f2e0fb4d9b266bb268cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.176834    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 ...
	I1003 20:11:54.176842    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4: {Name:mk5901abc37b171c59db7ffc9d76e0e216a71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.177118    3786 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:11:54.177337    3786 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.59da18a4 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:11:54.177555    3786 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:11:54.177568    3786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt with IP's: []
	I1003 20:11:54.364041    3786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt ...
	I1003 20:11:54.364056    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt: {Name:mkd4bfbfe53f743b1a7b7ac268d32fd14fb385ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.364451    3786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key ...
	I1003 20:11:54.364464    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key: {Name:mkdf0cc4929c3bad958c72a9482f31a2fa8ca5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:11:54.365504    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:11:54.365536    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:11:54.365558    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:11:54.365579    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:11:54.365604    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:11:54.365624    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:11:54.365642    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:11:54.365660    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:11:54.365764    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:11:54.365825    3786 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:11:54.365834    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:11:54.365869    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:11:54.365901    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:11:54.365930    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:11:54.365996    3786 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:11:54.366029    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.366050    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.366067    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.366583    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:11:54.387341    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:11:54.408328    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:11:54.428305    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:11:54.448988    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:11:54.468864    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:11:54.489657    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:11:54.509220    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:11:54.543480    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:11:54.569312    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:11:54.591217    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:11:54.611661    3786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:11:54.625312    3786 ssh_runner.go:195] Run: openssl version
	I1003 20:11:54.629547    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:11:54.638699    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642108    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.642156    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:11:54.646534    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:11:54.656587    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:11:54.665837    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669269    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.669318    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:11:54.673589    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:11:54.682875    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:11:54.692884    3786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696371    3786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.696422    3786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:11:54.700720    3786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
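
The openssl/ln pairs above implement the c_rehash convention: each CA certificate gets a symlink named <subject-hash>.0 under /etc/ssl/certs so TLS stacks can find it by subject (b5213941.0 in the last step is minikubeCA's hash). A Go sketch of one such link; hashLink is illustrative and shells out to openssl rather than reimplementing the hash:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink reproduces the c_rehash-style step above: compute the OpenSSL
// subject hash of a CA cert and symlink <hash>.0 to it.
func hashLink(cert, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return "", err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // the -f in `ln -fs`
	return link, os.Symlink(cert, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked", link) // e.g. /etc/ssl/certs/b5213941.0 as in the log
}
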
	I1003 20:11:54.710045    3786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:11:54.713181    3786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 20:11:54.713227    3786 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:54.713324    3786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:11:54.724733    3786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:11:54.733940    3786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:11:54.742148    3786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:11:54.750391    3786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:11:54.750399    3786 kubeadm.go:157] found existing configuration files:
	
	I1003 20:11:54.750445    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 20:11:54.758360    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:11:54.758407    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:11:54.766728    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 20:11:54.774882    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:11:54.774945    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:11:54.783372    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.791591    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:11:54.791647    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:11:54.799758    3786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 20:11:54.807693    3786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:11:54.807743    3786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:11:54.815816    3786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:11:54.877606    3786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1003 20:11:54.877675    3786 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:11:54.959181    3786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:11:54.959274    3786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:11:54.959345    3786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 20:11:54.969019    3786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:11:54.989516    3786 out.go:235]   - Generating certificates and keys ...
	I1003 20:11:54.989607    3786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:11:54.989657    3786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:11:55.160915    3786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 20:11:55.658052    3786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1003 20:11:55.863051    3786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1003 20:11:56.152829    3786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1003 20:11:56.418003    3786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1003 20:11:56.418108    3786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.602173    3786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1003 20:11:56.602276    3786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-214000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1003 20:11:56.828680    3786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 20:11:57.089912    3786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 20:11:57.268306    3786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1003 20:11:57.268429    3786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:11:57.485237    3786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:11:57.630473    3786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 20:11:57.894039    3786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:11:57.988077    3786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:11:58.196178    3786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:11:58.196611    3786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:11:58.198590    3786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:11:58.219895    3786 out.go:235]   - Booting up control plane ...
	I1003 20:11:58.219967    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:11:58.220030    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:11:58.220097    3786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:11:58.220176    3786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:11:58.220254    3786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:11:58.220295    3786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:11:58.335247    3786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 20:11:58.335351    3786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 20:11:58.836178    3786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.577412ms
	I1003 20:11:58.836270    3786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1003 20:12:04.924298    3786 kubeadm.go:310] [api-check] The API server is healthy after 6.092466477s
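
Both kubeadm waits above are plain HTTP polls against a healthz endpoint with a deadline (up to 4m0s each; the kubelet answered after about 0.5s, the API server after about 6.1s). A Go sketch of such a poll loop; the 500ms retry interval is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a healthz endpoint until it answers 200 OK or the
// deadline passes, like the kubelet-check/api-check waits above.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
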
	I1003 20:12:04.932333    3786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:12:04.939254    3786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:12:04.952530    3786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:12:04.952686    3786 kubeadm.go:310] [mark-control-plane] Marking the node ha-214000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:12:04.963622    3786 kubeadm.go:310] [bootstrap-token] Using token: 65w425.f8d5bl0nx70l33q5
	I1003 20:12:05.000420    3786 out.go:235]   - Configuring RBAC rules ...
	I1003 20:12:05.000619    3786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:12:05.005208    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:12:05.009514    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:12:05.011672    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:12:05.014059    3786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:12:05.016458    3786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:12:05.328777    3786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:12:05.749204    3786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:12:06.330210    3786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:12:06.337153    3786 kubeadm.go:310] 
	I1003 20:12:06.337220    3786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:12:06.337230    3786 kubeadm.go:310] 
	I1003 20:12:06.337299    3786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:12:06.337307    3786 kubeadm.go:310] 
	I1003 20:12:06.337331    3786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:12:06.337776    3786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:12:06.337822    3786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:12:06.337829    3786 kubeadm.go:310] 
	I1003 20:12:06.337882    3786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:12:06.337892    3786 kubeadm.go:310] 
	I1003 20:12:06.337934    3786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:12:06.337941    3786 kubeadm.go:310] 
	I1003 20:12:06.337982    3786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:12:06.338049    3786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:12:06.338103    3786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:12:06.338108    3786 kubeadm.go:310] 
	I1003 20:12:06.338176    3786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:12:06.338234    3786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:12:06.338239    3786 kubeadm.go:310] 
	I1003 20:12:06.338302    3786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338378    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f \
	I1003 20:12:06.338392    3786 kubeadm.go:310] 	--control-plane 
	I1003 20:12:06.338398    3786 kubeadm.go:310] 
	I1003 20:12:06.338468    3786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:12:06.338475    3786 kubeadm.go:310] 
	I1003 20:12:06.338540    3786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65w425.f8d5bl0nx70l33q5 \
	I1003 20:12:06.338627    3786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d64e6604fe6e768694ceb0ccc582ba45cdc8c10482ccbc36ed78eb10a99f781f 
	I1003 20:12:06.339477    3786 kubeadm.go:310] W1004 03:11:54.647192    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339711    3786 kubeadm.go:310] W1004 03:11:54.647683    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1003 20:12:06.339794    3786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 20:12:06.339812    3786 cni.go:84] Creating CNI manager for ""
	I1003 20:12:06.339818    3786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:12:06.364248    3786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1003 20:12:06.384154    3786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 20:12:06.389314    3786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1003 20:12:06.389326    3786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 20:12:06.403845    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 20:12:06.644756    3786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:12:06.644819    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:06.644831    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-214000 minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-214000 minikube.k8s.io/primary=true
	I1003 20:12:06.678801    3786 ops.go:34] apiserver oom_adj: -16
	I1003 20:12:06.797815    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.299794    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:07.799440    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.298098    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.798091    3786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:12:08.868821    3786 kubeadm.go:1113] duration metric: took 2.22404909s to wait for elevateKubeSystemPrivileges
	I1003 20:12:08.868840    3786 kubeadm.go:394] duration metric: took 14.155498781s to StartCluster
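
[Note] The five "get sa default" runs above, spaced roughly half a second apart, are a readiness poll: the RBAC binding created at 20:12:06 needs the default ServiceAccount to exist, so minikube retries until it does (here, ~2.2s as the duration metric records). A rough Go sketch of that polling pattern, with the kubectl invocation shortened to a plain binary name as an assumption of the sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // arbitrary cap for the sketch
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run()
			if err == nil {
				fmt.Println("default ServiceAccount exists; RBAC binding can proceed")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		fmt.Println("timed out waiting for the default ServiceAccount")
	}
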
	I1003 20:12:08.868860    3786 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.868967    3786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.869440    3786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:12:08.869685    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 20:12:08.869688    3786 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:08.869699    3786 start.go:241] waiting for startup goroutines ...
	I1003 20:12:08.869718    3786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:12:08.869758    3786 addons.go:69] Setting storage-provisioner=true in profile "ha-214000"
	I1003 20:12:08.869767    3786 addons.go:69] Setting default-storageclass=true in profile "ha-214000"
	I1003 20:12:08.869771    3786 addons.go:234] Setting addon storage-provisioner=true in "ha-214000"
	I1003 20:12:08.869785    3786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-214000"
	I1003 20:12:08.869791    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.869842    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:08.870068    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870072    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.870087    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.870092    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.882104    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
	I1003 20:12:08.882168    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50881
	I1003 20:12:08.882441    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882511    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.882804    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882813    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.882881    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.882894    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.883090    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883156    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.883274    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.883355    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.883419    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.883537    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.883565    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.885691    3786 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:12:08.885941    3786 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf11ff60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:12:08.886340    3786 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:12:08.886491    3786 addons.go:234] Setting addon default-storageclass=true in "ha-214000"
	I1003 20:12:08.886512    3786 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:12:08.886748    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.886773    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.895065    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I1003 20:12:08.895465    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.895823    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.895839    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.896078    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.896217    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.896327    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.896393    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.897460    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.897738    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I1003 20:12:08.898004    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.898325    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.898335    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.898555    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.898949    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:08.898981    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:08.910022    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50888
	I1003 20:12:08.910377    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:08.910694    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:08.910704    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:08.910903    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:08.911015    3786 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:12:08.911119    3786 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:08.911181    3786 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:12:08.912221    3786 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:12:08.912359    3786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:08.912372    3786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:12:08.912383    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.912463    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.912551    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.912632    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.912736    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.921020    3786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:12:08.941783    3786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:08.941796    3786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:12:08.941814    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:12:08.941988    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:12:08.942111    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:12:08.942220    3786 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:12:08.942315    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:12:08.948587    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 20:12:08.968723    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:12:09.030678    3786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:12:09.190561    3786 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
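
[Note] The long sed pipeline above fetches the kube-system/coredns ConfigMap, edits the Corefile in a stream, and replaces the ConfigMap. Reconstructed from the command itself, the fragment it inserts ahead of the "forward . /etc/resolv.conf" directive (a "log" directive is also added after "errors") is:

	hosts {
	   192.169.0.1 host.minikube.internal
	   fallthrough
	}

This is what makes host.minikube.internal resolve to 192.169.0.1 from inside the cluster, as confirmed by the "host record injected into CoreDNS's ConfigMap" line above.
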
	I1003 20:12:09.190584    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190594    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190806    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190814    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.190813    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190821    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.190825    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.190961    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.190963    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.190971    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.191022    3786 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:12:09.191042    3786 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:12:09.191115    3786 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1003 20:12:09.191120    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.191127    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.191130    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.197441    3786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1003 20:12:09.197844    3786 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1003 20:12:09.197850    3786 round_trippers.go:469] Request Headers:
	I1003 20:12:09.197856    3786 round_trippers.go:473]     Accept: application/json, */*
	I1003 20:12:09.197859    3786 round_trippers.go:473]     Content-Type: application/json
	I1003 20:12:09.197861    3786 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1003 20:12:09.200093    3786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1003 20:12:09.200221    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.200229    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.200380    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.200388    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.200401    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420599    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420612    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420780    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420789    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420797    3786 main.go:141] libmachine: Making call to close driver server
	I1003 20:12:09.420805    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.420817    3786 main.go:141] libmachine: (ha-214000) Calling .Close
	I1003 20:12:09.420947    3786 main.go:141] libmachine: Successfully made call to close driver server
	I1003 20:12:09.420955    3786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1003 20:12:09.420965    3786 main.go:141] libmachine: (ha-214000) DBG | Closing plugin on server side
	I1003 20:12:09.458210    3786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 20:12:09.516157    3786 addons.go:510] duration metric: took 646.439579ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 20:12:09.516193    3786 start.go:246] waiting for cluster config update ...
	I1003 20:12:09.516203    3786 start.go:255] writing updated cluster config ...
	I1003 20:12:09.553177    3786 out.go:201] 
	I1003 20:12:09.590690    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:09.590792    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.629267    3786 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:12:09.687367    3786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:12:09.687434    3786 cache.go:56] Caching tarball of preloaded images
	I1003 20:12:09.687623    3786 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:12:09.687652    3786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:12:09.687739    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:09.688487    3786 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:12:09.688583    3786 start.go:364] duration metric: took 74.25µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:12:09.688611    3786 start.go:93] Provisioning new machine with config: &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:12:09.688697    3786 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1003 20:12:09.710254    3786 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:12:09.710398    3786 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:12:09.710440    3786 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:12:09.722684    3786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
	I1003 20:12:09.723000    3786 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:12:09.723373    3786 main.go:141] libmachine: Using API Version  1
	I1003 20:12:09.723400    3786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:12:09.723601    3786 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:12:09.723713    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:09.723791    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:09.723880    3786 start.go:159] libmachine.API.Create for "ha-214000" (driver="hyperkit")
	I1003 20:12:09.723897    3786 client.go:168] LocalClient.Create starting
	I1003 20:12:09.723925    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem
	I1003 20:12:09.723964    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.723975    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724014    3786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem
	I1003 20:12:09.724041    3786 main.go:141] libmachine: Decoding PEM data...
	I1003 20:12:09.724051    3786 main.go:141] libmachine: Parsing certificate...
	I1003 20:12:09.724063    3786 main.go:141] libmachine: Running pre-create checks...
	I1003 20:12:09.724068    3786 main.go:141] libmachine: (ha-214000-m02) Calling .PreCreateCheck
	I1003 20:12:09.724130    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.724178    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:09.747760    3786 main.go:141] libmachine: Creating machine...
	I1003 20:12:09.747784    3786 main.go:141] libmachine: (ha-214000-m02) Calling .Create
	I1003 20:12:09.748061    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:09.748381    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.748022    3810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:12:09.748488    3786 main.go:141] libmachine: (ha-214000-m02) Downloading /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1003 20:12:09.939210    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.939114    3810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa...
	I1003 20:12:09.982756    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.982682    3810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk...
	I1003 20:12:09.982772    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing magic tar header
	I1003 20:12:09.982780    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Writing SSH key tar header
	I1003 20:12:09.983244    3786 main.go:141] libmachine: (ha-214000-m02) DBG | I1003 20:12:09.983203    3810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02 ...
	I1003 20:12:10.347153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.347170    3786 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:12:10.347184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:12:10.373871    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:12:10.373906    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:12:10.373960    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374001    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:12:10.374045    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:12:10.374118    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:12:10.374140    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:12:10.377384    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 DEBUG: hyperkit: Pid is 3812
	I1003 20:12:10.378552    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:12:10.378569    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:10.378721    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:10.380020    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:10.380124    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:10.380141    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:10.380184    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:10.380218    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:10.380246    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:10.388120    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:12:10.396804    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:12:10.397803    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.397835    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.397859    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.397872    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.791117    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:12:10.791134    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:12:10.905939    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:12:10.905955    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:12:10.905962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:12:10.905970    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:12:10.906767    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:12:10.906796    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:12:12.380443    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 1
	I1003 20:12:12.380457    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:12.380542    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:12.381414    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:12.381488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:12.381501    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:12.381522    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:12.381530    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:12.381537    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:14.382014    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 2
	I1003 20:12:14.382027    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:14.382153    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:14.383028    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:14.383072    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:14.383084    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:14.383094    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:14.383099    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:14.383106    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.384877    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 3
	I1003 20:12:16.384900    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:16.385047    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:16.385949    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:16.385962    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:16.385973    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:16.385980    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:16.385989    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:16.385997    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:16.494183    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:12:16.494197    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:12:16.494205    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:12:16.517584    3786 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:12:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:12:18.386111    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 4
	I1003 20:12:18.386127    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:18.386159    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:18.387097    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:18.387143    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1003 20:12:18.387155    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:12:18.387166    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:9a:10:8f:a8:67:89 ID:1,9a:10:8f:a8:67:89 Lease:0x66ff69f8}
	I1003 20:12:18.387171    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a2:54:a5:30:e9:eb ID:1,a2:54:a5:30:e9:eb Lease:0x66ff5b45}
	I1003 20:12:18.387199    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:aa:4f:82:ed:f6:1a ID:1,aa:4f:82:ed:f6:1a Lease:0x66ff65a3}
	I1003 20:12:20.387619    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 5
	I1003 20:12:20.387646    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.387818    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.389410    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:12:20.389488    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1003 20:12:20.389507    3786 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff6b23}
	I1003 20:12:20.389547    3786 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:12:20.389560    3786 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
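
[Note] The "Attempt 0..5" loop above polls the macOS DHCP lease database every two seconds until the MAC that hyperkit generated for the new VM appears, which yields the VM's IP (192.169.0.6 here). A simplified Go scan of /var/db/dhcpd_leases illustrating the lookup (this is not the driver's actual parser):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const mac = "8e:24:b7:e1:5:14" // the MAC generated for ha-214000-m02 above

		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Each lease block lists ip_address= before hw_address=, so remembering
		// the last IP seen is enough for this simplified scan.
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.Contains(line, mac) && ip != "" {
				fmt.Println("found match:", mac, "->", ip)
				return
			}
		}
		fmt.Println("no lease yet for", mac, "- the driver retries every ~2s")
	}
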
	I1003 20:12:20.389593    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:20.390522    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390685    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.390851    3786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1003 20:12:20.390863    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:12:20.390987    3786 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:12:20.391075    3786 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 3812
	I1003 20:12:20.391992    3786 main.go:141] libmachine: Detecting operating system of created instance...
	I1003 20:12:20.392000    3786 main.go:141] libmachine: Waiting for SSH to be available...
	I1003 20:12:20.392004    3786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 20:12:20.392008    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.392122    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.392209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392307    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.392400    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.392530    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.392724    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.392731    3786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 20:12:20.452090    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
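
[Note] WaitForSSH, as logged above, simply runs "exit 0" over SSH until it succeeds. A bare-bones Go sketch of the probe, using the golang.org/x/crypto/ssh package (an assumption of the sketch, not necessarily minikube's dependency), with the user, key path, and IP taken from the sshutil lines above:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, no known host key yet
			Timeout:         5 * time.Second,
		}
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					serr = sess.Run("exit 0") // the exact probe command in the log
					sess.Close()
				}
				client.Close()
				if serr == nil {
					fmt.Println("SSH is available")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for SSH")
	}
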
	I1003 20:12:20.452103    3786 main.go:141] libmachine: Detecting the provisioner...
	I1003 20:12:20.452108    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.452238    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.452347    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452451    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.452545    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.452708    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.452856    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.452863    3786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1003 20:12:20.512517    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1003 20:12:20.512548    3786 main.go:141] libmachine: found compatible host: buildroot
	I1003 20:12:20.512554    3786 main.go:141] libmachine: Provisioning with buildroot...
	I1003 20:12:20.512559    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512722    3786 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:12:20.512733    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.512827    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.512906    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.512996    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513080    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.513172    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.513298    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.513426    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.513434    3786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:12:20.587617    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:12:20.587633    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.587778    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.587891    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.587992    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.588072    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.588212    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.588355    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.588372    3786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:12:20.652353    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:12:20.652368    3786 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:12:20.652378    3786 buildroot.go:174] setting up certificates
	I1003 20:12:20.652383    3786 provision.go:84] configureAuth start
	I1003 20:12:20.652389    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:12:20.652547    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:20.652658    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.652742    3786 provision.go:143] copyHostCerts
	I1003 20:12:20.652775    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652820    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:12:20.652826    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:12:20.652958    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:12:20.653166    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653196    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:12:20.653200    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:12:20.653269    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:12:20.653430    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653464    3786 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:12:20.653469    3786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:12:20.653545    3786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:12:20.653731    3786 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
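
[Note] The "generating server cert" line above signs a per-machine server certificate with the shared minikube CA, embedding the SANs listed in san=[...] so the Docker TLS endpoint answers for both its IPs and hostnames. An illustrative Go sketch of that step (not minikube's implementation; it assumes the CA pair exists as ca.pem/ca-key.pem with a PKCS#1 RSA key, and shortens the paths):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			panic(err)
		}
		keyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(keyPEM)
		if caBlock == nil || keyBlock == nil {
			panic("expected PEM input")
		}
		ca, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // PKCS#1 RSA assumed
		if err != nil {
			panic(err)
		}

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs exactly as listed in the san=[...] log line above:
			DNSNames:    []string{"ha-214000-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
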
	I1003 20:12:20.813360    3786 provision.go:177] copyRemoteCerts
	I1003 20:12:20.813426    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:12:20.813443    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.813586    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.813742    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.813877    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.813990    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:20.850961    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:12:20.851030    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:12:20.870933    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:12:20.871004    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:12:20.890912    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:12:20.890980    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:12:20.910707    3786 provision.go:87] duration metric: took 258.313535ms to configureAuth
	I1003 20:12:20.910721    3786 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:12:20.910855    3786 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:12:20.910868    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:20.911013    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.911107    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.911209    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911296    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.911388    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.911514    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.911640    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.911647    3786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:12:20.972465    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:12:20.972479    3786 buildroot.go:70] root file system type: tmpfs
	I1003 20:12:20.972563    3786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:12:20.972574    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:20.972721    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:20.972833    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.972918    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:20.973006    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:20.973168    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:20.973302    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:20.973343    3786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:12:21.045078    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:12:21.045095    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:21.045231    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:21.045325    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045411    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:21.045498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:21.045650    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:21.045779    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:21.045791    3786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:12:22.600224    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
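
The exchange above is minikube's idempotent unit update: the candidate unit is written to docker.service.new, and `diff -u ... || { mv ...; systemctl ...; }` swaps the file in and restarts Docker only when the content differs (diff exits non-zero on a difference, or, as here, when the installed unit does not exist yet). The bare `ExecStart=` line matters too: systemd accumulates multiple ExecStart= values, and for anything other than Type=oneshot a second value is an error, so any inherited command must be cleared before the new one is set. A condensed sketch of the same pattern, with the dockerd flags abbreviated for illustration:

	sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
	[Service]
	# Clear any ExecStart inherited from an earlier definition, then set ours.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	# diff exits non-zero when the files differ, so the replace-and-restart
	# branch runs only when the unit actually changed.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart docker; }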
	
	I1003 20:12:22.600251    3786 main.go:141] libmachine: Checking connection to Docker...
	I1003 20:12:22.600259    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetURL
	I1003 20:12:22.600435    3786 main.go:141] libmachine: Docker is up and running!
	I1003 20:12:22.600444    3786 main.go:141] libmachine: Reticulating splines...
	I1003 20:12:22.600448    3786 client.go:171] duration metric: took 12.876436808s to LocalClient.Create
	I1003 20:12:22.600461    3786 start.go:167] duration metric: took 12.876472047s to libmachine.API.Create "ha-214000"
	I1003 20:12:22.600467    3786 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:12:22.600477    3786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:12:22.600492    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.600668    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:12:22.600680    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.600780    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.600875    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.600970    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.601075    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.640337    3786 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:12:22.644093    3786 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:12:22.644109    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:12:22.644205    3786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:12:22.644348    3786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:12:22.644355    3786 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:12:22.644533    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:12:22.653756    3786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:12:22.685257    3786 start.go:296] duration metric: took 84.778887ms for postStartSetup
	I1003 20:12:22.685286    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:12:22.685929    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.686072    3786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:12:22.686412    3786 start.go:128] duration metric: took 12.997595621s to createHost
	I1003 20:12:22.686425    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.686515    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.686599    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686701    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.686798    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.686925    3786 main.go:141] libmachine: Using SSH client type: native
	I1003 20:12:22.687044    3786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda49d00] 0xda4c9e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:12:22.687051    3786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:12:22.746950    3786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011542.860223875
	
	I1003 20:12:22.746961    3786 fix.go:216] guest clock: 1728011542.860223875
	I1003 20:12:22.746966    3786 fix.go:229] Guest: 2024-10-03 20:12:22.860223875 -0700 PDT Remote: 2024-10-03 20:12:22.68642 -0700 PDT m=+53.371804581 (delta=173.803875ms)
	I1003 20:12:22.746975    3786 fix.go:200] guest clock delta is within tolerance: 173.803875ms
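
The `date +%s.%N` round trip above is the guest clock-skew check: fix.go compares the guest's wall clock against the host's and proceeds when the delta (here ~174ms, a figure that also absorbs SSH latency) is within tolerance. A minimal way to repeat the measurement by hand, using the key path and IP from this log; note that BSD date on the macOS host lacks %N, so python3 supplies the sub-second host timestamp:

	host_ts=$(python3 -c 'import time; print(f"{time.time():.9f}")')
	guest_ts=$(ssh -i /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa \
	  docker@192.169.0.6 'date +%s.%N')
	# Print the absolute host/guest difference in seconds.
	awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'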
	I1003 20:12:22.746981    3786 start.go:83] releasing machines lock for "ha-214000-m02", held for 13.05827693s
	I1003 20:12:22.746997    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.747135    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:12:22.778085    3786 out.go:177] * Found network options:
	I1003 20:12:22.800483    3786 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:12:22.822664    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.822715    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823572    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.823860    3786 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:12:22.824001    3786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:12:22.824059    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:12:22.824112    3786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:12:22.824226    3786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:12:22.824246    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:12:22.824266    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824461    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:12:22.824498    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824698    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:12:22.824710    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824930    3786 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:12:22.824941    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:12:22.825049    3786 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:12:22.859693    3786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:12:22.859773    3786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:12:22.904020    3786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
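
Note that the competing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the change is reversible. If they ever need to be restored by hand, stripping the suffix is enough (a sketch, assuming the same naming convention):

	# Undo minikube's rename of sidelined CNI configs.
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;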
	I1003 20:12:22.904042    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:22.904144    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:22.920874    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:12:22.929835    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:12:22.938804    3786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:12:22.938864    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:12:22.947739    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.956498    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:12:22.965426    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:12:22.974356    3786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:12:22.983537    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:12:22.992785    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:12:23.001725    3786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:12:23.010582    3786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:12:23.018552    3786 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:12:23.018604    3786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:12:23.027475    3786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:12:23.035507    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.134077    3786 ssh_runner.go:195] Run: sudo systemctl restart containerd
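
The run of sed commands above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to pause:3.10, forces `SystemdCgroup = false` so containerd uses the cgroupfs driver, migrates the runtime.v1.linux and runc.v1 shims to runc.v2, and points conf_dir at /etc/cni/net.d. The sysctl probe fails until br_netfilter is loaded, which is why the modprobe and the ip_forward write follow before the restart. Condensed, the kernel preparation and a quick post-restart check look like this (a sketch of the effect, not the exact edits):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	# Confirm the cgroup driver containerd ended up with.
	grep -n 'SystemdCgroup' /etc/containerd/config.toml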
	I1003 20:12:23.152266    3786 start.go:495] detecting cgroup driver to use...
	I1003 20:12:23.152354    3786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:12:23.172785    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.185099    3786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:12:23.205995    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:12:23.218258    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.229115    3786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:12:23.249868    3786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:12:23.260469    3786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:12:23.275378    3786 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:12:23.278400    3786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:12:23.285702    3786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
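
At this point the CRI endpoint is switched from containerd to cri-dockerd: /etc/crictl.yaml now names unix:///var/run/cri-dockerd.sock, and a 10-cni.conf drop-in is staged for cri-docker.service. Once the services are up, one quick way to confirm crictl is talking to the Docker-backed runtime (a sketch; run on the guest):

	# Should report the Docker-backed CRI, not containerd.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info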
	I1003 20:12:23.299048    3786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:12:23.399150    3786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:12:23.503720    3786 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:12:23.503742    3786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
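
The 130-byte /etc/docker/daemon.json pushed here is what pins Docker itself to cgroupfs. The payload is not echoed in the log, but a minimal file with that effect would look like the following (illustrative only; the real file may carry additional keys):

	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF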
	I1003 20:12:23.518202    3786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:12:23.611158    3786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:13:24.628320    3786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016625719s)
	I1003 20:13:24.628401    3786 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:13:24.663942    3786 out.go:201] 
	W1003 20:13:24.684946    3786 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:12:21 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.468633045Z" level=info msg="Starting up"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469108028Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:12:21 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:21.469706025Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.487628336Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507060426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507109083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507155028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507165317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507227752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507260241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507394721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507469962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507482624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507490198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507543695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.507694842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509245321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509283836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509393702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509428322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509497593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.509564444Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512295530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512348164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512361609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512372978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512398663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512552113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512784704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512907443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512941496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512953831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512963149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512971718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512979908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512989204Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.512999030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513014510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513028295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513036764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513049969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513059659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513067451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513076268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513086500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513096857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513104724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513114315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513122553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513131705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513138978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513146306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513153866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513169849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513184278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513193112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513200420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513245032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513280524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513290487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513298921Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513305455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513313466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513320544Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513576535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513633233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513662413Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:12:21 ha-214000-m02 dockerd[511]: time="2024-10-04T03:12:21.513695230Z" level=info msg="containerd successfully booted in 0.026881s"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.490814669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.499609150Z" level=info msg="Loading containers: start."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.592246842Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.677306677Z" level=info msg="Loading containers: done."
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685242520Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685303867Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685339862Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.685430972Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.712195072Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:12:22 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:12:22 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:22.714589273Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737024966Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:12:23 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.737915169Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738216415Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738251321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:12:23 ha-214000-m02 dockerd[504]: time="2024-10-04T03:12:23.738263075Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:12:24 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:12:24 ha-214000-m02 dockerd[913]: time="2024-10-04T03:12:24.770552545Z" level=info msg="Starting up"
	Oct 04 03:13:24 ha-214000-m02 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:13:24 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
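
Reading the journal excerpt: the first dockerd start succeeds using its own managed containerd (the socket under /var/run/docker/containerd/), but the restart at 03:12:24 spends a full minute dialing /run/containerd/containerd.sock before exiting on the context deadline. That is the system containerd socket, and the system containerd was stopped moments earlier (`sudo systemctl stop -f containerd` above), so a plausible first checklist on the guest would be:

	sudo systemctl status containerd --no-pager
	sudo ls -l /run/containerd/containerd.sock
	sudo journalctl -u containerd --no-pager | tail -n 50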
	W1003 20:13:24.684996    3786 out.go:270] * 
	W1003 20:13:24.685669    3786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:13:24.748224    3786 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.609856146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.615919730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616016462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.616162060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d44e77a58bfbcd3636f77ffd81283e6b03efe9e5dc88c021442461d2d33a3a3b/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:12:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20614064fdfe19f6749b5771ff0a30a428b5230efd3bcfa55d43aa8f25ce5616/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823080888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.823785833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824198141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.824391231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862433657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.862813529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.867925615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:12:28 ha-214000 dockerd[1275]: time="2024-10-04T03:12:28.868097260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363641015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363750285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363769672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 dockerd[1275]: time="2024-10-04T03:13:28.363888443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:28 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241895c2dd1d78a28b36a50806edad320f8a1ac083d452c174d4f7bde4dd5673/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:13:34 ha-214000 cri-dockerd[1166]: time="2024-10-04T03:13:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185526110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185592857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185606660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:13:34 ha-214000 dockerd[1275]: time="2024-10-04T03:13:34.185685899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         19 minutes ago      Running             coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         19 minutes ago      Running             coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	792bd20fa10c9       6e38f40d628db                                                                                         19 minutes ago      Running             storage-provisioner       0                   a4df5305516c4       storage-provisioner
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              19 minutes ago      Running             kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         19 minutes ago      Running             kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	2e5127305b39f       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     19 minutes ago      Running             kube-vip                  0                   4d6220bdd1cdc       kube-vip-ha-214000
	95af0d749f454       6bab7719df100                                                                                         19 minutes ago      Running             kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         19 minutes ago      Running             kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         19 minutes ago      Running             kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         19 minutes ago      Running             etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:28:54 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 797946633cb845879b866bebe75be818
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    9af69b77-b29f-476b-8660-d17f40a68a69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x3 over 19m)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x3 over 19m)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                19m                kubelet          Node ha-214000 status is now: NodeReady
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m12s (x2 over 6m12s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s (x2 over 6m12s)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s (x2 over 6m12s)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                5m42s                  kubelet          Node ha-214000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.588601] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.237578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.675089] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +0.096624] systemd-fstab-generator[499]: Ignoring "noauto" option for root device
	[  +1.775649] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.309671] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.058128] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.112824] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +2.463116] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.095604] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.114522] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.134214] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +3.566077] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.057138] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.514131] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.459653] systemd-fstab-generator[1652]: Ignoring "noauto" option for root device
	[  +0.056119] kauditd_printk_skb: 70 callbacks suppressed
	[Oct 4 03:12] systemd-fstab-generator[2141]: Ignoring "noauto" option for root device
	[  +0.078043] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.010148] kauditd_printk_skb: 27 callbacks suppressed
	[ +17.525670] kauditd_printk_skb: 23 callbacks suppressed
	[Oct 4 03:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.479515Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:12:00.487114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.479633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.487300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:12:00.480184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.487761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:12:00.492362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.490499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:12:00.487170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:27:00.215476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-04T03:27:00.217042Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"1.236946ms","hash":1433174615,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-04T03:27:00.217099Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1433174615,"revision":1514,"compact-revision":973}
	{"level":"info","ts":"2024-10-04T03:31:28.060845Z","caller":"traceutil/trace.go:171","msg":"trace[860112081] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"112.489562ms","start":"2024-10-04T03:31:27.948335Z","end":"2024-10-04T03:31:28.060825Z","steps":["trace[860112081] 'process raft request'  (duration: 91.094323ms)","trace[860112081] 'compare'  (duration: 21.269614ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:31:32 up 20 min,  0 users,  load average: 0.13, 0.20, 0.18
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:30:23.498507       1 main.go:299] handling current node
	I1004 03:30:33.504932       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:33.505118       1 main.go:299] handling current node
	I1004 03:30:33.505159       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:33.505270       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:30:43.497209       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:43.497346       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:30:43.497580       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:43.497755       1 main.go:299] handling current node
	I1004 03:30:53.496402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:53.496595       1 main.go:299] handling current node
	I1004 03:30:53.496647       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:53.496795       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:03.496468       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:03.496619       1 main.go:299] handling current node
	I1004 03:31:03.496645       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:03.496656       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:13.497200       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:13.497236       1 main.go:299] handling current node
	I1004 03:31:13.497252       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:13.497259       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:23.497508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:23.497727       1 main.go:299] handling current node
	I1004 03:31:23.497777       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:23.497873       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [95af0d749f45] <==
	I1004 03:12:01.800434       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:12:01.803349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:12:01.803694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:12:01.806063       1 controller.go:615] quota admission added evaluator for: namespaces
	I1004 03:12:01.862270       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:12:02.695251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:12:02.698954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:12:02.699302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:12:03.001263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:12:03.027584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:12:03.111487       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:12:03.115731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:12:03.116421       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:12:03.119045       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:12:03.747970       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:12:05.520326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:12:05.527528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:12:05.533597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:12:09.201571       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:12:09.477427       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:24:50.435518       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50957: use of closed network connection
	E1004 03:24:50.895229       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50965: use of closed network connection
	E1004 03:24:51.354535       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:50973: use of closed network connection
	E1004 03:24:54.778771       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51007: use of closed network connection
	E1004 03:24:54.975618       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51009: use of closed network connection
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	I1004 03:26:22.202705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:28:54.315206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:31:28.798824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	E1004 03:12:01.800728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:12:01.801233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1004 03:12:01.800153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:27:04 ha-214000 kubelet[2148]: E1004 03:27:04.977774    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:27:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:27:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:27:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:27:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:28:04 ha-214000 kubelet[2148]: E1004 03:28:04.975288    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:28:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:28:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:28:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:28:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:29:04 ha-214000 kubelet[2148]: E1004 03:29:04.973888    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:29:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:29:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:29:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:29:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:30:04 ha-214000 kubelet[2148]: E1004 03:30:04.973795    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:30:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:30:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:30:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:30:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:31:04 ha-214000 kubelet[2148]: E1004 03:31:04.972847    2148 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:31:04 ha-214000 kubelet[2148]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:31:04 ha-214000 kubelet[2148]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:31:04 ha-214000 kubelet[2148]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:31:04 ha-214000 kubelet[2148]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  7m58s (x3 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  28s (x3 over 5m43s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.14s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-214000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-214000 -v=7 --alsologtostderr
E1003 20:31:33.953157    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-214000 -v=7 --alsologtostderr: (18.903760338s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr
E1003 20:33:01.993148    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m16.089889167s)

                                                
                                                
-- stdout --
	* [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	* Restarting existing hyperkit VM for "ha-214000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:31:52.447855    4810 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:31:52.448081    4810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:31:52.448086    4810 out.go:358] Setting ErrFile to fd 2...
	I1003 20:31:52.448090    4810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:31:52.448269    4810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:31:52.449885    4810 out.go:352] Setting JSON to false
	I1003 20:31:52.477543    4810 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3682,"bootTime":1728009030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:31:52.477638    4810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:31:52.499693    4810 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:31:52.521638    4810 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:31:52.521663    4810 notify.go:220] Checking for updates...
	I1003 20:31:52.564381    4810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:31:52.585595    4810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:31:52.606352    4810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:31:52.627398    4810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:31:52.648377    4810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:31:52.670155    4810 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:31:52.670331    4810 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:31:52.671166    4810 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:52.671252    4810 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:52.683376    4810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51660
	I1003 20:31:52.683841    4810 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:52.684416    4810 main.go:141] libmachine: Using API Version  1
	I1003 20:31:52.684428    4810 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:52.684702    4810 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:52.684877    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:31:52.716478    4810 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:31:52.758288    4810 start.go:297] selected driver: hyperkit
	I1003 20:31:52.758319    4810 start.go:901] validating driver "hyperkit" against &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:31:52.758557    4810 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:31:52.758748    4810 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:31:52.759015    4810 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:31:52.771036    4810 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:31:52.777294    4810 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:52.777317    4810 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:31:52.782265    4810 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:31:52.782301    4810 cni.go:84] Creating CNI manager for ""
	I1003 20:31:52.782348    4810 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:31:52.782415    4810 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:31:52.782516    4810 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:31:52.824536    4810 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:31:52.845289    4810 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:31:52.845382    4810 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:31:52.845409    4810 cache.go:56] Caching tarball of preloaded images
	I1003 20:31:52.845640    4810 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:31:52.845660    4810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:31:52.845842    4810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:31:52.846859    4810 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:31:52.847012    4810 start.go:364] duration metric: took 127.758µs to acquireMachinesLock for "ha-214000"
	I1003 20:31:52.847067    4810 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:31:52.847081    4810 fix.go:54] fixHost starting: 
	I1003 20:31:52.847493    4810 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:31:52.847523    4810 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:31:52.858994    4810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51662
	I1003 20:31:52.859358    4810 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:31:52.859782    4810 main.go:141] libmachine: Using API Version  1
	I1003 20:31:52.859806    4810 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:31:52.860065    4810 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:31:52.860200    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:31:52.860314    4810 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:31:52.860400    4810 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:52.860483    4810 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 3798
	I1003 20:31:52.861535    4810 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 3798 missing from process table
	I1003 20:31:52.861564    4810 fix.go:112] recreateIfNeeded on ha-214000: state=Stopped err=<nil>
	I1003 20:31:52.861584    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	W1003 20:31:52.861678    4810 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:31:52.904422    4810 out.go:177] * Restarting existing hyperkit VM for "ha-214000" ...
	I1003 20:31:52.927480    4810 main.go:141] libmachine: (ha-214000) Calling .Start
	I1003 20:31:52.927771    4810 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:52.927814    4810 main.go:141] libmachine: (ha-214000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:31:52.929857    4810 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 3798 missing from process table
	I1003 20:31:52.929870    4810 main.go:141] libmachine: (ha-214000) DBG | pid 3798 is in state "Stopped"
	I1003 20:31:52.929900    4810 main.go:141] libmachine: (ha-214000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid...
	I1003 20:31:52.930125    4810 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:31:53.058081    4810 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:31:53.058115    4810 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:31:53.058303    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:31:53.058386    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:31:53.058460    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:31:53.058550    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:31:53.058568    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:31:53.060188    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 DEBUG: hyperkit: Pid is 4822
	I1003 20:31:53.060541    4810 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:31:53.060564    4810 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:31:53.060664    4810 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:31:53.062661    4810 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:31:53.062779    4810 main.go:141] libmachine: (ha-214000) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:31:53.062801    4810 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff619f}
	I1003 20:31:53.062816    4810 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:69:5a:d:d4:66 ID:1,92:69:5a:d:d4:66 Lease:0x66ff6e23}
	I1003 20:31:53.062827    4810 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6afb}
	I1003 20:31:53.062835    4810 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:31:53.062857    4810 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
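
The lines above show how the driver resolves the VM's IP: it scans macOS's /var/db/dhcpd_leases for a lease whose hardware address matches the VM's MAC. A minimal Go sketch of that lookup follows; the field names (ip_address=, hw_address= with a leading type prefix) reflect the usual bootpd lease format and are assumptions here, not minikube's actual parser.

// Sketch only: scan a bootpd-style lease file for a MAC and return its IP.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry starts; forget earlier fields
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// bootpd stores "1,<mac>"; strip the leading type prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			// assumes ip_address precedes hw_address within an entry
			if strings.EqualFold(hw, mac) && ip != "" {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s (scan err=%v)", mac, sc.Err())
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}
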
	I1003 20:31:53.062929    4810 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:31:53.063641    4810 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:31:53.063834    4810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:31:53.064280    4810 machine.go:93] provisionDockerMachine start ...
	I1003 20:31:53.064290    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:31:53.064452    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:31:53.064574    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:31:53.064698    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:31:53.064828    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:31:53.064978    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:31:53.065148    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:31:53.065359    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:31:53.065368    4810 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:31:53.070978    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:31:53.139697    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:31:53.140723    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:31:53.140748    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:31:53.140757    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:31:53.140762    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:31:53.525118    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:31:53.525133    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:31:53.639619    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:31:53.639642    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:31:53.639661    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:31:53.639675    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:31:53.640602    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:31:53.640615    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:31:59.219555    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:31:59.219640    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:31:59.219648    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:31:59.243955    4810 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:31:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:32:04.124795    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:32:04.124810    4810 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:32:04.124979    4810 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:32:04.124991    4810 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:32:04.125124    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.125209    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:04.125295    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.125382    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.125505    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:04.125665    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:32:04.125822    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:32:04.125830    4810 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:32:04.188429    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:32:04.188449    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.188595    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:04.188698    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.188794    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.188892    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:04.189050    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:32:04.189190    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:32:04.189207    4810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:32:04.247890    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:32:04.247922    4810 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:32:04.247937    4810 buildroot.go:174] setting up certificates
	I1003 20:32:04.247944    4810 provision.go:84] configureAuth start
	I1003 20:32:04.247952    4810 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:32:04.248086    4810 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:32:04.248178    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.248259    4810 provision.go:143] copyHostCerts
	I1003 20:32:04.248287    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:32:04.248363    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:32:04.248371    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:32:04.248507    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:32:04.248729    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:32:04.248778    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:32:04.248783    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:32:04.248867    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:32:04.249038    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:32:04.249084    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:32:04.249089    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:32:04.249174    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:32:04.249334    4810 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
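
provision.go here mints a server certificate signed by the local CA with the SANs listed above (127.0.0.1, 192.169.0.5, ha-214000, localhost, minikube). A hedged Go sketch of that kind of issuance follows; it is illustrative only, with errors elided, the key size and validity window assumed, and a throwaway CA generated in place of the on-disk ca.pem/ca-key.pem.

// Sketch: issue a CA-signed server cert carrying DNS and IP SANs.
// Errors are elided for brevity; do not copy this pattern into real code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (real code would load ca.pem / ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.ha-214000"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-214000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
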
	I1003 20:32:04.386237    4810 provision.go:177] copyRemoteCerts
	I1003 20:32:04.386303    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:32:04.386319    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.386452    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:04.386646    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.386750    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:04.386851    4810 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:32:04.420548    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:32:04.420629    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:32:04.440085    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:32:04.440150    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:32:04.459472    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:32:04.459542    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 20:32:04.479081    4810 provision.go:87] duration metric: took 231.16898ms to configureAuth
	I1003 20:32:04.479093    4810 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:32:04.479275    4810 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:32:04.479288    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:04.479428    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.479543    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:04.479644    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.479736    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.479842    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:04.479964    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:32:04.480096    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:32:04.480104    4810 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:32:04.532152    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:32:04.532164    4810 buildroot.go:70] root file system type: tmpfs
	I1003 20:32:04.532244    4810 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:32:04.532257    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.532383    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:04.532508    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.532595    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.532692    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:04.532861    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:32:04.533002    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:32:04.533047    4810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:32:04.595181    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:32:04.595202    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:04.595346    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:04.595453    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.595533    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:04.595604    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:04.595733    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:32:04.595862    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:32:04.595874    4810 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:32:06.238717    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:32:06.238731    4810 machine.go:96] duration metric: took 13.178205225s to provisionDockerMachine
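
The docker.service update above follows an install-if-changed pattern: render the unit to docker.service.new, diff it against the installed unit, and only move it into place and daemon-reload/enable/restart when the content differs. A small Go sketch of the same idea, run locally rather than over SSH and comparing in memory instead of shelling out to diff:

// Sketch: install a systemd unit only when its content changed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installUnit(path string, content []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged: skip daemon-reload/restart entirely
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Unit body elided; requires root, like the sudo'd commands in the log.
	_ = installUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n..."))
}
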
	I1003 20:32:06.238746    4810 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:32:06.238753    4810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:32:06.238765    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:06.238976    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:32:06.238992    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:06.239091    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:06.239193    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:06.239283    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:06.239376    4810 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:32:06.276958    4810 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:32:06.281640    4810 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:32:06.281653    4810 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:32:06.281764    4810 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:32:06.281983    4810 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:32:06.281989    4810 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:32:06.282252    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:32:06.290395    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:32:06.325328    4810 start.go:296] duration metric: took 86.588689ms for postStartSetup
	I1003 20:32:06.325351    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:06.325551    4810 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:32:06.325564    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:06.325658    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:06.325741    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:06.325835    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:06.325921    4810 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:32:06.359162    4810 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:32:06.359230    4810 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:32:06.413163    4810 fix.go:56] duration metric: took 13.569896165s for fixHost
	I1003 20:32:06.413189    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:06.413347    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:06.413443    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:06.413536    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:06.413633    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:06.413782    4810 main.go:141] libmachine: Using SSH client type: native
	I1003 20:32:06.413927    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x40cdd00] 0x40d09e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:32:06.413934    4810 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:32:06.466419    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012726.580641788
	
	I1003 20:32:06.466437    4810 fix.go:216] guest clock: 1728012726.580641788
	I1003 20:32:06.466442    4810 fix.go:229] Guest: 2024-10-03 20:32:06.580641788 -0700 PDT Remote: 2024-10-03 20:32:06.413176 -0700 PDT m=+14.008340754 (delta=167.465788ms)
	I1003 20:32:06.466457    4810 fix.go:200] guest clock delta is within tolerance: 167.465788ms
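
The clock check above runs `date +%s.%N` in the guest, parses the seconds.nanoseconds output, and compares the result with the host clock. A sketch of that parse-and-compare step, with a 1s tolerance assumed for illustration (the logged 167ms delta would pass):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1728012726.580641788")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the log; against today's host clock the
	// computed delta will of course be large.
	guest, err := parseGuestClock("1728012726.580641788")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > time.Second { // tolerance assumed for this sketch
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v within tolerance\n", delta)
	}
}
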
	I1003 20:32:06.466461    4810 start.go:83] releasing machines lock for "ha-214000", held for 13.623259667s
	I1003 20:32:06.466480    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:06.466626    4810 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:32:06.466743    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:06.467086    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:06.467197    4810 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:32:06.467284    4810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:32:06.467314    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:06.467367    4810 ssh_runner.go:195] Run: cat /version.json
	I1003 20:32:06.467378    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:32:06.467412    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:06.467494    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:06.467510    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:32:06.467594    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:32:06.467647    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:06.467683    4810 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:32:06.467802    4810 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:32:06.467814    4810 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:32:06.495666    4810 ssh_runner.go:195] Run: systemctl --version
	I1003 20:32:06.548836    4810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:32:06.553255    4810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:32:06.553307    4810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:32:06.566171    4810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:32:06.566183    4810 start.go:495] detecting cgroup driver to use...
	I1003 20:32:06.566287    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:32:06.584102    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:32:06.592756    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:32:06.601501    4810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:32:06.601562    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:32:06.610349    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:32:06.618974    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:32:06.627730    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:32:06.636489    4810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:32:06.645476    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:32:06.654338    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:32:06.663031    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:32:06.671779    4810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:32:06.679906    4810 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:32:06.679966    4810 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:32:06.688825    4810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
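
The three commands above form a fallback chain: when the bridge-netfilter sysctl key is missing, load br_netfilter, then enable IPv4 forwarding by writing the procfs file directly. Sketched locally in Go (the real run goes through ssh_runner and sudo):

// Sketch: ensure bridge netfilter and IPv4 forwarding on the guest.
package main

import (
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// sysctl cannot see the key until the module is loaded
		_ = exec.Command("modprobe", "br_netfilter").Run()
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (needs root)
	_ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}
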
	I1003 20:32:06.696787    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:32:06.801316    4810 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:32:06.819714    4810 start.go:495] detecting cgroup driver to use...
	I1003 20:32:06.819807    4810 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:32:06.844202    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:32:06.856394    4810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:32:06.879501    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:32:06.890968    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:32:06.901120    4810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:32:06.921672    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:32:06.932192    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:32:06.947422    4810 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:32:06.950371    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:32:06.957567    4810 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:32:06.971234    4810 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:32:07.070975    4810 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:32:07.175749    4810 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:32:07.175822    4810 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
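
Here minikube pins Docker's cgroup driver to cgroupfs by writing a small /etc/docker/daemon.json (130 bytes in this run). The keys below are an assumption, a typical way to express that setting, not a dump of the file that was actually written:

// Sketch: write a daemon.json selecting the cgroupfs cgroup driver.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// Writing under /etc requires root, like the scp in the log.
	if err := os.WriteFile("/etc/docker/daemon.json", b, 0o644); err != nil {
		panic(err)
	}
}
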
	I1003 20:32:07.189570    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:32:07.289028    4810 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:33:08.303599    4810 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017126156s)
	I1003 20:33:08.303713    4810 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:33:08.341117    4810 out.go:201] 
	W1003 20:33:08.363082    4810 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:32:04 ha-214000 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:32:05 ha-214000 dockerd[480]: time="2024-10-04T03:32:05.000949346Z" level=info msg="Starting up"
	Oct 04 03:32:05 ha-214000 dockerd[480]: time="2024-10-04T03:32:05.001406419Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:32:05 ha-214000 dockerd[480]: time="2024-10-04T03:32:05.001973700Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.019187672Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034806001Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034828615Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034863955Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034874168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035028511Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035064921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035175397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035208974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035220775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035228718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035349449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035562732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037579616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037622655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037721389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037755986Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037897647Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037944001Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040667302Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040714870Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040728582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040738687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040748273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040791541Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040968217Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041042189Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041075645Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041086976Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041095697Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041103923Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041112351Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041121308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041129750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041138236Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041146653Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041154270Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041166328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041175203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041182950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041193779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041201280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041210093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041217561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041225152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041232915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041241768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041248843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041255962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041270308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041283005Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041309377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041320474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041330740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041380130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041413636Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041423847Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041432408Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041438991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041447218Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041454177Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041682475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041735394Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041760745Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041791825Z" level=info msg="containerd successfully booted in 0.023466s"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.021097315Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.060550898Z" level=info msg="Loading containers: start."
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.212676131Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.278423127Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.321962858Z" level=info msg="Loading containers: done."
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329007459Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329092832Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329141635Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329320841Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.350143002Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:32:06 ha-214000 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.350245073Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.415578860Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416437547Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416499768Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416528972Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416541346Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:32:07 ha-214000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:32:08 ha-214000 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:32:08 ha-214000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:32:08 ha-214000 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:32:08 ha-214000 dockerd[1163]: time="2024-10-04T03:32:08.445252128Z" level=info msg="Starting up"
	Oct 04 03:33:08 ha-214000 dockerd[1163]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:33:08 ha-214000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:33:08 ha-214000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:33:08 ha-214000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
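
The journal pinpoints the failure: after the unit swap, the restarted dockerd (pid 1163) timed out dialing /run/containerd/containerd.sock, so docker.service never came up and the start exits with RUNTIME_ENABLE. One defensive pattern is to wait for the containerd socket to accept connections before issuing the docker restart; a sketch, with the timeout values assumed:

// Sketch: block until a unix socket accepts connections, or give up.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not accepting connections after %v", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 30*time.Second); err != nil {
		fmt.Println(err) // restarting docker now would likely fail the same way
	}
}
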
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:32:04 ha-214000 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:32:05 ha-214000 dockerd[480]: time="2024-10-04T03:32:05.000949346Z" level=info msg="Starting up"
	Oct 04 03:32:05 ha-214000 dockerd[480]: time="2024-10-04T03:32:05.001406419Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:32:05 ha-214000 dockerd[480]: time="2024-10-04T03:32:05.001973700Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.019187672Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034806001Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034828615Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034863955Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.034874168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035028511Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035064921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035175397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035208974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035220775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035228718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035349449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.035562732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037579616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037622655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037721389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037755986Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037897647Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.037944001Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040667302Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040714870Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040728582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040738687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040748273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040791541Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.040968217Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041042189Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041075645Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041086976Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041095697Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041103923Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041112351Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041121308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041129750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041138236Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041146653Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041154270Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041166328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041175203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041182950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041193779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041201280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041210093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041217561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041225152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041232915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041241768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041248843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041255962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041270308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041283005Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041309377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041320474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041330740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041380130Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041413636Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041423847Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041432408Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041438991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041447218Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041454177Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041682475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041735394Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041760745Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:32:05 ha-214000 dockerd[487]: time="2024-10-04T03:32:05.041791825Z" level=info msg="containerd successfully booted in 0.023466s"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.021097315Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.060550898Z" level=info msg="Loading containers: start."
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.212676131Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.278423127Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.321962858Z" level=info msg="Loading containers: done."
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329007459Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329092832Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329141635Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.329320841Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.350143002Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:32:06 ha-214000 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:32:06 ha-214000 dockerd[480]: time="2024-10-04T03:32:06.350245073Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.415578860Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416437547Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416499768Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416528972Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:32:07 ha-214000 dockerd[480]: time="2024-10-04T03:32:07.416541346Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:32:07 ha-214000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:32:08 ha-214000 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:32:08 ha-214000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:32:08 ha-214000 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:32:08 ha-214000 dockerd[1163]: time="2024-10-04T03:32:08.445252128Z" level=info msg="Starting up"
	Oct 04 03:33:08 ha-214000 dockerd[1163]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:33:08 ha-214000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:33:08 ha-214000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:33:08 ha-214000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:33:08.363198    4810 out.go:270] * 
	* 
	W1003 20:33:08.364549    4810 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:33:08.426876    4810 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-214000 -v=7 --alsologtostderr" : exit status 90
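The restart above never reaches kubeadm: dockerd[1163] comes up but cannot dial /run/containerd/containerd.sock within its 60-second deadline, so docker.service exits 1 and systemd marks the unit failed. A minimal triage sketch for this state, assuming the ha-214000 VM is still reachable over the profile's SSH config (illustrative commands, not part of the test run):

	# check whether containerd ever came up inside the guest
	out/minikube-darwin-amd64 ssh -p ha-214000 -- sudo systemctl status containerd
	# tail the containerd journal for the failed boot
	out/minikube-darwin-amd64 ssh -p ha-214000 -- sudo journalctl -u containerd --no-pager -n 50
	# confirm the socket dockerd is dialing actually exists
	out/minikube-darwin-amd64 ssh -p ha-214000 -- ls -l /run/containerd/containerd.sock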
ha_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-214000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000: exit status 6 (156.764226ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:33:08.691911    4867 status.go:458] kubeconfig endpoint: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-214000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.29s)
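Every status check from here on fails the same way: the "ha-214000" entry is missing from /Users/jenkins/minikube-integration/19546-1440/kubeconfig, so status reports the kubeconfig as Misconfigured and exits 6. The warning printed above names the fix; applied to this profile it would look like the sketch below (illustrative, not executed by the suite):

	# rewrite the kubeconfig entry for this profile, then verify the active context
	out/minikube-darwin-amd64 update-context -p ha-214000
	kubectl config current-context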

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 node delete m03 -v=7 --alsologtostderr: exit status 83 (167.157322ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-214000-m02 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-214000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:33:08.763889    4872 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:33:08.764329    4872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:33:08.764335    4872 out.go:358] Setting ErrFile to fd 2...
	I1003 20:33:08.764338    4872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:33:08.764510    4872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:33:08.764866    4872 mustload.go:65] Loading cluster: ha-214000
	I1003 20:33:08.765210    4872 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:33:08.765603    4872 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:08.765639    4872 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:08.776331    4872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51692
	I1003 20:33:08.776699    4872 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:08.777098    4872 main.go:141] libmachine: Using API Version  1
	I1003 20:33:08.777118    4872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:08.777360    4872 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:08.777497    4872 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:33:08.777596    4872 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:33:08.777658    4872 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:33:08.778729    4872 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:33:08.778990    4872 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:08.779013    4872 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:08.789711    4872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51694
	I1003 20:33:08.790031    4872 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:08.790395    4872 main.go:141] libmachine: Using API Version  1
	I1003 20:33:08.790423    4872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:08.790669    4872 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:08.790810    4872 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:33:08.791213    4872 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:08.791242    4872 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:08.801798    4872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51696
	I1003 20:33:08.802122    4872 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:08.802483    4872 main.go:141] libmachine: Using API Version  1
	I1003 20:33:08.802499    4872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:08.802713    4872 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:08.802833    4872 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:33:08.802926    4872 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:33:08.803018    4872 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:33:08.804030    4872 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:33:08.825647    4872 out.go:177] * The control-plane node ha-214000-m02 host is not running: state=Stopped
	I1003 20:33:08.847350    4872 out.go:177]   To start a cluster, run: "minikube start -p ha-214000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-214000 node delete m03 -v=7 --alsologtostderr": exit status 83
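Exit status 83 is minikube declining to mutate the cluster while a control-plane node is down: hyperkit pid 4274 for ha-214000-m02 is gone from the process table, so the delete of m03 is rejected up front. Recovery would mean bringing the stopped node back first; the first command below is the one suggested in the output, while the per-node `node start` form is an assumption about the CLI rather than something taken from this log:

	out/minikube-darwin-amd64 start -p ha-214000               # suggested by the output above
	out/minikube-darwin-amd64 node start m02 -p ha-214000      # assumed per-node variant
	out/minikube-darwin-amd64 -p ha-214000 node delete m03 -v=7 --alsologtostderr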
ha_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 7 (185.232591ms)

                                                
                                                
-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	ha-214000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-214000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:33:08.931818    4877 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:33:08.932134    4877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:33:08.932139    4877 out.go:358] Setting ErrFile to fd 2...
	I1003 20:33:08.932143    4877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:33:08.932316    4877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:33:08.932495    4877 out.go:352] Setting JSON to false
	I1003 20:33:08.932517    4877 mustload.go:65] Loading cluster: ha-214000
	I1003 20:33:08.932550    4877 notify.go:220] Checking for updates...
	I1003 20:33:08.932856    4877 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:33:08.932875    4877 status.go:174] checking status of ha-214000 ...
	I1003 20:33:08.933289    4877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:08.933325    4877 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:08.944578    4877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51699
	I1003 20:33:08.944917    4877 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:08.945337    4877 main.go:141] libmachine: Using API Version  1
	I1003 20:33:08.945346    4877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:08.945564    4877 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:08.945679    4877 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:33:08.945787    4877 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:33:08.945863    4877 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:33:08.946919    4877 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:33:08.946937    4877 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:33:08.947187    4877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:08.947207    4877 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:08.957976    4877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51701
	I1003 20:33:08.958298    4877 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:08.958613    4877 main.go:141] libmachine: Using API Version  1
	I1003 20:33:08.958623    4877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:08.958940    4877 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:08.959072    4877 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:33:08.959173    4877 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:33:08.959439    4877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:08.959465    4877 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:08.970051    4877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51703
	I1003 20:33:08.970362    4877 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:08.970714    4877 main.go:141] libmachine: Using API Version  1
	I1003 20:33:08.970729    4877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:08.970934    4877 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:08.971058    4877 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:33:08.971224    4877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:33:08.971245    4877 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:33:08.971351    4877 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:33:08.971451    4877 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:33:08.971557    4877 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:33:08.971663    4877 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:33:09.002656    4877 ssh_runner.go:195] Run: systemctl --version
	I1003 20:33:09.006841    4877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1003 20:33:09.018125    4877 status.go:458] kubeconfig endpoint: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:33:09.018148    4877 api_server.go:166] Checking apiserver status ...
	I1003 20:33:09.018201    4877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:33:09.028399    4877 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:33:09.028408    4877 status.go:463] ha-214000 apiserver status = Stopped (err=<nil>)
	I1003 20:33:09.028413    4877 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:33:09.028425    4877 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:33:09.028694    4877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:09.028715    4877 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:09.039756    4877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51706
	I1003 20:33:09.040071    4877 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:09.040396    4877 main.go:141] libmachine: Using API Version  1
	I1003 20:33:09.040410    4877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:09.040641    4877 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:09.040748    4877 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:33:09.040830    4877 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:33:09.040907    4877 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:33:09.041930    4877 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:33:09.041976    4877 status.go:371] ha-214000-m02 host status = "Stopped" (err=<nil>)
	I1003 20:33:09.041986    4877 status.go:384] host is not running, skipping remaining checks
	I1003 20:33:09.041989    4877 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:33:09.042004    4877 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:33:09.042295    4877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:33:09.042319    4877 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:33:09.053050    4877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51708
	I1003 20:33:09.053369    4877 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:33:09.053705    4877 main.go:141] libmachine: Using API Version  1
	I1003 20:33:09.053716    4877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:33:09.053945    4877 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:33:09.054069    4877 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:33:09.054165    4877 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:33:09.054243    4877 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:33:09.055268    4877 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid 4114 missing from process table
	I1003 20:33:09.055302    4877 status.go:371] ha-214000-m03 host status = "Stopped" (err=<nil>)
	I1003 20:33:09.055311    4877 status.go:384] host is not running, skipping remaining checks
	I1003 20:33:09.055316    4877 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr" : exit status 7
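For reading these results: minikube's status help describes the exit code as a sum of bit flags for the VM, the cluster and Kubernetes (so 7 = 1 + 2 + 4, all three not OK), which is consistent with m02 and m03 being fully stopped even though the primary host still shows Running. A machine-readable view of the same state, assuming jq is available on the agent (not something the suite runs):

	out/minikube-darwin-amd64 status -p ha-214000 --output json | jq .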
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000: exit status 6 (157.992533ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:33:09.203538    4884 status.go:458] kubeconfig endpoint: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-214000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:415: expected profile "ha-214000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000: exit status 6 (158.019552ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:33:09.590017    4897 status.go:458] kubeconfig endpoint: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-214000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (146s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E1003 20:38:01.993317    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (2m22.343219228s)

                                                
                                                
-- stdout --
	* [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	* Restarting existing hyperkit VM for "ha-214000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	* Enabled addons: 
	
	* Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	* Restarting existing hyperkit VM for "ha-214000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:35:48.304540    4951 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:48.304733    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304739    4951 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:48.304743    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304927    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:35:48.306332    4951 out.go:352] Setting JSON to false
	I1003 20:35:48.334066    4951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3918,"bootTime":1728009030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:35:48.334215    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:35:48.356076    4951 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:35:48.398703    4951 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:35:48.398800    4951 notify.go:220] Checking for updates...
	I1003 20:35:48.442667    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:35:48.463910    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:35:48.485340    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:35:48.506572    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:35:48.527740    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:35:48.550278    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:48.551029    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.551094    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.563226    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51755
	I1003 20:35:48.563804    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.564307    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.564319    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.564662    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.564822    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.565117    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:35:48.565435    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.565487    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.576762    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I1003 20:35:48.577263    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.577677    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.577713    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.578069    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.578299    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.610723    4951 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:35:48.652521    4951 start.go:297] selected driver: hyperkit
	I1003 20:35:48.652550    4951 start.go:901] validating driver "hyperkit" against &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.652818    4951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:35:48.653002    4951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.653249    4951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:35:48.665237    4951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:35:48.671535    4951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.671574    4951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:35:48.676549    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:35:48.676588    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:35:48.676625    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:35:48.676690    4951 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.676815    4951 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.698601    4951 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:35:48.740785    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:35:48.740857    4951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:35:48.740884    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:35:48.741146    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:35:48.741164    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:35:48.741343    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.742237    4951 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:48.742380    4951 start.go:364] duration metric: took 119.499µs to acquireMachinesLock for "ha-214000"
	I1003 20:35:48.742414    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:48.742428    4951 fix.go:54] fixHost starting: 
	I1003 20:35:48.742857    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.742889    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.754302    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I1003 20:35:48.754621    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.754990    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.755005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.755241    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.755370    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.755459    4951 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:35:48.755544    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.755632    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:35:48.756648    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.756678    4951 fix.go:112] recreateIfNeeded on ha-214000: state=Stopped err=<nil>
	I1003 20:35:48.756695    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	W1003 20:35:48.756784    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:48.778933    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000" ...
	I1003 20:35:48.800930    4951 main.go:141] libmachine: (ha-214000) Calling .Start
	I1003 20:35:48.801199    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.801247    4951 main.go:141] libmachine: (ha-214000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:35:48.803311    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.803325    4951 main.go:141] libmachine: (ha-214000) DBG | pid 4822 is in state "Stopped"
	I1003 20:35:48.803341    4951 main.go:141] libmachine: (ha-214000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid...
	I1003 20:35:48.803610    4951 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:35:48.922193    4951 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:35:48.922226    4951 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:35:48.922379    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922424    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922546    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:35:48.922605    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:35:48.922622    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:35:48.924313    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Pid is 4964
	I1003 20:35:48.924838    4951 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:35:48.924852    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.924911    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4964
	I1003 20:35:48.927353    4951 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:35:48.927405    4951 main.go:141] libmachine: (ha-214000) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:35:48.927432    4951 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6fc2}
	I1003 20:35:48.927443    4951 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:35:48.927454    4951 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
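For context: the hyperkit driver discovers the VM's IP by matching the MAC it generated for the guest against the host's DHCP lease database, which is what the "Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases" lines above record. A minimal Go sketch of that lookup follows; the stanza-based field names (ip_address=, hw_address=) are an assumption about the macOS lease-file format, not code taken from the driver source.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a dhcpd_leases-style file for a stanza whose
// hw_address field ends with the given MAC and returns its ip_address.
// The field names assumed here are not taken from the driver source.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	matched := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease stanza begins
			ip, matched = "", false
		case line == "}": // stanza ends: report it if the MAC matched
			if matched && ip != "" {
				return ip, nil
			}
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			matched = strings.HasSuffix(line, mac)
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	// MAC from the log above; expects 192.169.0.5 on this host.
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("IP:", ip)
}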
	I1003 20:35:48.927543    4951 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:35:48.928494    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:35:48.928701    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.929276    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:35:48.929289    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.929410    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:35:48.929535    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:35:48.929649    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929777    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929900    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:35:48.930094    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:35:48.930303    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:35:48.930312    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:35:48.935400    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:35:48.990306    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:35:48.991238    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:48.991260    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:48.991278    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:48.991294    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.374490    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:35:49.374504    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:35:49.489812    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:49.489840    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:49.489854    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:49.489865    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.490699    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:35:49.490709    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:35:55.079541    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:35:55.079635    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:35:55.079652    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:35:55.103846    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:36:23.994265    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:36:23.994281    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994427    4951 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:36:23.994438    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994568    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:23.994676    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:23.994778    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994888    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994989    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:23.995134    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:23.995292    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:23.995301    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:36:24.061419    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:36:24.061438    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.061566    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.061665    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061761    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061855    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.062009    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.062160    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.062171    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:36:24.123229    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:36:24.123250    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:36:24.123267    4951 buildroot.go:174] setting up certificates
	I1003 20:36:24.123274    4951 provision.go:84] configureAuth start
	I1003 20:36:24.123280    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:24.123436    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:24.123534    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.123640    4951 provision.go:143] copyHostCerts
	I1003 20:36:24.123670    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123751    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:36:24.123759    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123933    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:36:24.124159    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124208    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:36:24.124213    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124299    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:36:24.124456    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124504    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:36:24.124508    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124593    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:36:24.124759    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
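The san=[...] list above shows the server certificate being issued with both IP and DNS subject alternative names, so the Docker TLS endpoint validates for 127.0.0.1, the VM IP, and the host names alike. A self-contained sketch of issuing such a SAN'd certificate with Go's crypto/x509; it is self-signed for brevity (the real flow signs with the minikube CA key) and the organization and lifetime values are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // illustrative lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-214000", "localhost", "minikube"},
	}
	// Self-signed for brevity; the real flow signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}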
	I1003 20:36:24.242470    4951 provision.go:177] copyRemoteCerts
	I1003 20:36:24.242536    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:36:24.242550    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.242680    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.242779    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.242882    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.242976    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:24.278106    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:36:24.278181    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:36:24.297749    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:36:24.297814    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 20:36:24.317337    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:36:24.317417    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:36:24.337360    4951 provision.go:87] duration metric: took 214.07513ms to configureAuth
	I1003 20:36:24.337374    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:36:24.337568    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:24.337582    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:24.337722    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.337811    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.337893    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.337973    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.338066    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.338199    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.338322    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.338329    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:36:24.392942    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:36:24.392953    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:36:24.393026    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:36:24.393038    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.393177    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.393275    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393375    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393458    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.393607    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.393746    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.393789    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:36:24.457890    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:36:24.457915    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.458049    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.458145    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458223    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458324    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.458459    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.458606    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.458617    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:36:26.102134    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:36:26.102148    4951 machine.go:96] duration metric: took 37.172864722s to provisionDockerMachine
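The unit update just above is a compare-then-swap: the rendered unit is staged at docker.service.new, and only when "diff -u" reports a difference (a nonzero exit, here because the destination did not exist yet) is it moved into place, followed by daemon-reload, enable, and restart. A local Go analogue of that idempotent replacement; the paths and the reload step are illustrative, not the driver's implementation.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged installs newPath over path only when the contents
// differ; a missing destination counts as changed, as in the log run.
func replaceIfChanged(path, newPath string) error {
	cur, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	old, oldErr := os.ReadFile(path)
	if oldErr == nil && bytes.Equal(old, cur) {
		return os.Remove(newPath) // identical: drop the staged copy
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	// Illustrative: pick up the new unit after swapping it in.
	return exec.Command("systemctl", "daemon-reload").Run()
}

func main() {
	err := replaceIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}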
	I1003 20:36:26.102162    4951 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:36:26.102174    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:36:26.102184    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.102399    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:36:26.102415    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.102503    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.102602    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.102703    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.102803    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.136711    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:36:26.139862    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:36:26.139874    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:36:26.139975    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:36:26.140193    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:36:26.140200    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:36:26.140451    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:36:26.147627    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:26.167774    4951 start.go:296] duration metric: took 65.6041ms for postStartSetup
	I1003 20:36:26.167794    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.167968    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:36:26.167979    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.168089    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.168182    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.168259    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.168350    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.202842    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:36:26.202914    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:36:26.255647    4951 fix.go:56] duration metric: took 37.513223093s for fixHost
	I1003 20:36:26.255670    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.255816    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.255918    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256012    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256105    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.256247    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:26.256399    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:26.256406    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:36:26.311780    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012986.433392977
	
	I1003 20:36:26.311792    4951 fix.go:216] guest clock: 1728012986.433392977
	I1003 20:36:26.311797    4951 fix.go:229] Guest: 2024-10-03 20:36:26.433392977 -0700 PDT Remote: 2024-10-03 20:36:26.25566 -0700 PDT m=+37.989104353 (delta=177.732977ms)
	I1003 20:36:26.311814    4951 fix.go:200] guest clock delta is within tolerance: 177.732977ms
	I1003 20:36:26.311818    4951 start.go:83] releasing machines lock for "ha-214000", held for 37.569431066s
	I1003 20:36:26.311838    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.311964    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:26.312074    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312353    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312465    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312560    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:36:26.312588    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312635    4951 ssh_runner.go:195] Run: cat /version.json
	I1003 20:36:26.312646    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312690    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312745    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312781    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312825    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312873    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.312925    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.313009    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.313022    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.345222    4951 ssh_runner.go:195] Run: systemctl --version
	I1003 20:36:26.396121    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:36:26.401139    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:36:26.401189    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:36:26.413838    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:36:26.413851    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.413956    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.430665    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:36:26.439518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:36:26.448241    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:36:26.448295    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:36:26.457135    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.465984    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:36:26.474764    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.483576    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:36:26.492518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:36:26.501284    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:36:26.510114    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:36:26.518992    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:36:26.527133    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:36:26.527188    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:36:26.536233    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
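The netfilter setup above is a probe-then-load fallback: the sysctl probe exits with status 255 because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the module is loaded with modprobe and IPv4 forwarding is then enabled directly. A minimal sketch of the same fallback, assuming the guest exposes the usual sysctl and modprobe binaries and /proc paths.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe: exits nonzero while /proc/sys/net/bridge is absent, as in
	// the "cannot stat" stderr captured above.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("probe failed, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe:", err)
			return
		}
	}
	// Enable IPv4 forwarding the same way the logged command does.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}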
	I1003 20:36:26.544367    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:26.641761    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:36:26.661796    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.661912    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:36:26.678816    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.689242    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:36:26.701530    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.713140    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.724511    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:36:26.748353    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.759647    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.774287    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:36:26.777216    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:36:26.785211    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:36:26.800364    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:36:26.895359    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:36:27.004148    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:36:27.004239    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:36:27.018268    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:27.118971    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:36:29.441016    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.322026405s)
	I1003 20:36:29.441097    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:36:29.451786    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.462092    4951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:36:29.564537    4951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:36:29.669649    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.781720    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:36:29.795175    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.806194    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.917885    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:36:29.986582    4951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:36:29.986686    4951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:36:29.991213    4951 start.go:563] Will wait 60s for crictl version
	I1003 20:36:29.991273    4951 ssh_runner.go:195] Run: which crictl
	I1003 20:36:29.994306    4951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:36:30.019989    4951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:36:30.020072    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.036824    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.075524    4951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:36:30.075569    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:30.076023    4951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:36:30.080492    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.091206    4951 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:36:30.091284    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:30.091356    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.103771    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.103786    4951 docker.go:615] Images already preloaded, skipping extraction
	I1003 20:36:30.103870    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.126324    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.126343    4951 cache_images.go:84] Images are preloaded, skipping loading
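Both "docker images" runs above return the full preload list, so extraction and loading are skipped. The decision reduces to a set difference between the required images and the images reported by docker images --format {{.Repository}}:{{.Tag}}; a small sketch of that check follows, with the function name and sample values being illustrative.

package main

import (
	"fmt"
	"strings"
)

// missing returns the required images that do not appear in the
// newline-separated docker images output.
func missing(have string, want []string) []string {
	got := map[string]bool{}
	for _, l := range strings.Split(have, "\n") {
		got[strings.TrimSpace(l)] = true
	}
	var out []string
	for _, w := range want {
		if !got[w] {
			out = append(out, w)
		}
	}
	return out
}

func main() {
	// Illustrative values; the real check uses the full preload list.
	have := "registry.k8s.io/kube-apiserver:v1.31.1\nregistry.k8s.io/etcd:3.5.15-0"
	want := []string{"registry.k8s.io/kube-apiserver:v1.31.1", "registry.k8s.io/pause:3.10"}
	fmt.Println(missing(have, want)) // [registry.k8s.io/pause:3.10]
}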
	I1003 20:36:30.126351    4951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:36:30.126423    4951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:36:30.126505    4951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:36:30.165944    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:36:30.165958    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:36:30.165970    4951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:36:30.165987    4951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:36:30.166068    4951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
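	The stacked YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) is rendered from the kubeadm options printed just before it. A minimal text/template sketch of rendering the per-node fields of the first document; the template shape here is illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Illustrative template covering the fields that vary per node.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the kubeadm options line in the log above.
	if err := t.Execute(os.Stdout, map[string]any{
		"NodeIP":    "192.169.0.5",
		"Port":      8443,
		"NodeName":  "ha-214000",
		"CRISocket": "unix:///var/run/cri-dockerd.sock",
	}); err != nil {
		panic(err)
	}
}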
	
	I1003 20:36:30.166080    4951 kube-vip.go:115] generating kube-vip config ...
	I1003 20:36:30.166149    4951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:36:30.180124    4951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:36:30.180189    4951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
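	In the manifest above, kube-vip's leader election (vip_leaderelection with the plndr-cp-lock lease) keeps the control-plane VIP 192.169.0.254 pinned to exactly one node, and lb_enable load-balances API traffic on port 8443. A quick reachability sketch against that VIP; the probe and its timeout are illustrative and not part of the test itself.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port come from the kube-vip manifest above.
	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP answering on 8443")
}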
	I1003 20:36:30.180256    4951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:36:30.189222    4951 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:36:30.189287    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:36:30.198523    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:36:30.212259    4951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:36:30.225613    4951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:36:30.239086    4951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1003 20:36:30.252640    4951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:36:30.255560    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.265017    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:30.361055    4951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:36:30.373903    4951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:36:30.373915    4951 certs.go:194] generating shared ca certs ...
	I1003 20:36:30.373925    4951 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.374133    4951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:36:30.374229    4951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:36:30.374245    4951 certs.go:256] generating profile certs ...
	I1003 20:36:30.374372    4951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:36:30.374395    4951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9
	I1003 20:36:30.374412    4951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1003 20:36:30.510048    4951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 ...
	I1003 20:36:30.510064    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9: {Name:mkec630c178c10067131af2c5f3c9dd0e1fb1860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510503    4951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 ...
	I1003 20:36:30.510513    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9: {Name:mk3eade5c23e406463c386755ec0dc38e869ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510763    4951 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:36:30.511004    4951 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:36:30.511276    4951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:36:30.511286    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:36:30.511308    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:36:30.511328    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:36:30.511347    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:36:30.511373    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:36:30.511393    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:36:30.511411    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:36:30.511428    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:36:30.511527    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:36:30.511580    4951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:36:30.511594    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:36:30.511627    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:36:30.511660    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:36:30.511688    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:36:30.511757    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:30.511791    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.511811    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.511829    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.512286    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:36:30.547800    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:36:30.588463    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:36:30.624659    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:36:30.646082    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:36:30.665519    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:36:30.684966    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:36:30.704971    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:36:30.724730    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:36:30.744135    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:36:30.763735    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:36:30.782963    4951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:36:30.796275    4951 ssh_runner.go:195] Run: openssl version
	I1003 20:36:30.800456    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:36:30.808784    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812168    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812211    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.816317    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:36:30.824743    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:36:30.833176    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836568    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836613    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.840895    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:36:30.849202    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:36:30.857643    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861134    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861184    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.865411    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 20:36:30.873865    4951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:36:30.877389    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:36:30.881788    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:36:30.886088    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:36:30.890422    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:36:30.894596    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:36:30.898773    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
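	
	The six "openssl x509 -checkend 86400" probes above ask whether each control-plane certificate is still valid 86400 seconds (24 hours) from now; openssl exits non-zero if not, which is how minikube decides a cert needs regenerating. A minimal Go sketch of the same check, assuming one hypothetical cert path (this is not minikube's code):
	
	    package main
	
	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )
	
	    func main() {
	        // Hypothetical path; the log probes several certs under /var/lib/minikube/certs.
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // "-checkend 86400" exits non-zero if the cert is no longer valid 86400s from now.
	        if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
	            fmt.Println("certificate expires within 24h; needs regeneration")
	            os.Exit(1)
	        }
	        fmt.Println("certificate valid for at least another 24h")
	    }
	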
	I1003 20:36:30.902881    4951 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:36:30.902998    4951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:36:30.915213    4951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:36:30.923319    4951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:36:30.923331    4951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:36:30.923384    4951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:36:30.930635    4951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:36:30.930978    4951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.931055    4951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1440/kubeconfig needs updating (will repair): [kubeconfig missing "ha-214000" cluster setting kubeconfig missing "ha-214000" context setting]
	I1003 20:36:30.931232    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.931928    4951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.932136    4951 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd994f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:36:30.932465    4951 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:36:30.932658    4951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:36:30.939898    4951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1003 20:36:30.939909    4951 kubeadm.go:597] duration metric: took 16.574315ms to restartPrimaryControlPlane
	I1003 20:36:30.939914    4951 kubeadm.go:394] duration metric: took 37.038509ms to StartCluster
	I1003 20:36:30.939939    4951 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940028    4951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.940366    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940584    4951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:30.940597    4951 start.go:241] waiting for startup goroutines ...
	I1003 20:36:30.940605    4951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:36:30.940715    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:30.982685    4951 out.go:177] * Enabled addons: 
	I1003 20:36:31.003752    4951 addons.go:510] duration metric: took 63.132383ms for enable addons: enabled=[]
	I1003 20:36:31.003791    4951 start.go:246] waiting for cluster config update ...
	I1003 20:36:31.003802    4951 start.go:255] writing updated cluster config ...
	I1003 20:36:31.026641    4951 out.go:201] 
	I1003 20:36:31.047648    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:31.047721    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.069716    4951 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:36:31.111550    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:31.111584    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:36:31.111814    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:36:31.111847    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:36:31.111978    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.113032    4951 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:31.113184    4951 start.go:364] duration metric: took 124.813µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:36:31.113203    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:36:31.113208    4951 fix.go:54] fixHost starting: m02
	I1003 20:36:31.113580    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:36:31.113606    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:36:31.125064    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I1003 20:36:31.125517    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:36:31.125993    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:36:31.126005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:36:31.126252    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:36:31.126414    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.126604    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:36:31.126798    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.126890    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:36:31.127965    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:36:31.127999    4951 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
	I1003 20:36:31.128009    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	W1003 20:36:31.128129    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:36:31.170879    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
	I1003 20:36:31.191480    4951 main.go:141] libmachine: (ha-214000-m02) Calling .Start
	I1003 20:36:31.191791    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.191820    4951 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:36:31.191892    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:36:31.219578    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:36:31.219600    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:36:31.219761    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219849    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:36:31.219889    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:36:31.219902    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:36:31.221267    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Pid is 4978
	I1003 20:36:31.221656    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:36:31.221669    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.221749    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4978
	I1003 20:36:31.222942    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:36:31.223055    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:36:31.223074    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 20:36:31.223092    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff619f}
	I1003 20:36:31.223117    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:36:31.223134    4951 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:36:31.223155    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:36:31.223858    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:36:31.224037    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.224458    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:36:31.224468    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.224583    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:36:31.224679    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:36:31.224777    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.224929    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.225026    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:36:31.225183    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:31.225340    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:36:31.225347    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:36:31.232364    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:36:31.241337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:36:31.242541    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.242561    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.242572    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.242585    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.630094    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:36:31.630110    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:36:31.744778    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.744796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.744827    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.744846    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.745666    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:36:31.745681    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:36:37.337247    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:36:37.337337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:36:37.337350    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:36:37.361028    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:37:06.292112    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:37:06.292127    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292262    4951 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:37:06.292277    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292374    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.292454    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.292532    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292617    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292696    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.292835    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.292968    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.292976    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:37:06.362584    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:37:06.362599    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.362740    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.362851    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.362945    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.363048    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.363204    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.363366    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.363377    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:37:06.429246    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
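	
	The embedded shell script above keeps /etc/hosts consistent with the new hostname: if no line already ends in ha-214000-m02, it rewrites an existing 127.0.1.1 entry or appends one. The same logic expressed in Go, as a sketch (the function name ensureHostsEntry is hypothetical):
	
	    package main
	
	    import (
	        "fmt"
	        "regexp"
	    )
	
	    // ensureHostsEntry mirrors the shell above: no-op if the hostname is
	    // present, otherwise rewrite or append the 127.0.1.1 line.
	    func ensureHostsEntry(hosts, name string) string {
	        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
	            return hosts // hostname already mapped
	        }
	        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	        if re.MatchString(hosts) {
	            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	        }
	        return hosts + "127.0.1.1 " + name + "\n"
	    }
	
	    func main() {
	        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-214000-m02"))
	    }
	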
	I1003 20:37:06.429262    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:37:06.429275    4951 buildroot.go:174] setting up certificates
	I1003 20:37:06.429281    4951 provision.go:84] configureAuth start
	I1003 20:37:06.429287    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.429430    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:06.429529    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.429617    4951 provision.go:143] copyHostCerts
	I1003 20:37:06.429649    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429696    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:37:06.429701    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429820    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:37:06.430049    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430079    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:37:06.430084    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430193    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:37:06.430369    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430399    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:37:06.430404    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430485    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:37:06.430651    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
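	
	The server cert above is generated on the host with SANs for every name a client might dial: 127.0.0.1, the VM IP 192.169.0.6, and the hostnames ha-214000-m02, localhost and minikube. A simplified, self-signed Go sketch of producing a certificate with those SANs (minikube actually signs with the shared ca.pem/ca-key.pem; this is not its generator):
	
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )
	
	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs taken from the log line above.
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	            DNSNames:     []string{"ha-214000-m02", "localhost", "minikube"},
	        }
	        // Self-signed for brevity; the real flow signs with the CA key pair.
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            log.Fatal(err)
	        }
	        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
	            log.Fatal(err)
	        }
	    }
	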
	I1003 20:37:06.504641    4951 provision.go:177] copyRemoteCerts
	I1003 20:37:06.504702    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:37:06.504733    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.504884    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.504988    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.505086    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.505168    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:06.541867    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:37:06.541936    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:37:06.560930    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:37:06.560992    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:37:06.579917    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:37:06.579984    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:37:06.599634    4951 provision.go:87] duration metric: took 170.34603ms to configureAuth
	I1003 20:37:06.599649    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:37:06.599816    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:37:06.599829    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:06.599963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.600044    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.600140    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600213    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600306    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.600434    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.600557    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.600564    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:37:06.660138    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:37:06.660150    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:37:06.660232    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:37:06.660242    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.660378    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.660498    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660607    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660708    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.660861    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.661001    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.661049    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:37:06.728946    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:37:06.728963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.729096    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.729209    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729384    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.729544    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.729682    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.729693    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:37:08.289911    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:37:08.289925    4951 machine.go:96] duration metric: took 37.065461315s to provisionDockerMachine
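	
	The "diff -u ... || { mv ...; restart; }" command above is the provisioner's change-detection step: docker is only restarted when the rendered unit differs from the installed one, and here diff fails because no docker.service exists yet, so the new file is installed and the service enabled. A local Go sketch of that compare-then-swap pattern (paths as in the log; this is not minikube's implementation, which runs the shell over SSH):
	
	    package main
	
	    import (
	        "bytes"
	        "log"
	        "os"
	        "os/exec"
	    )
	
	    func main() {
	        newUnit := "/lib/systemd/system/docker.service.new" // rendered by the provisioner
	        oldUnit := "/lib/systemd/system/docker.service"
	
	        want, err := os.ReadFile(newUnit)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // A missing installed unit counts as "different", as in the log.
	        have, err := os.ReadFile(oldUnit)
	        if err == nil && bytes.Equal(have, want) {
	            return // unchanged; avoids a needless docker restart
	        }
	        if err := os.Rename(newUnit, oldUnit); err != nil {
	            log.Fatal(err)
	        }
	        for _, args := range [][]string{
	            {"systemctl", "daemon-reload"},
	            {"systemctl", "enable", "docker"},
	            {"systemctl", "restart", "docker"},
	        } {
	            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
	                log.Fatalf("%v: %v\n%s", args, err, out)
	            }
	        }
	    }
	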
	I1003 20:37:08.289933    4951 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:37:08.289944    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:37:08.289954    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.290150    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:37:08.290163    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.290256    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.290347    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.290425    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.290523    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.325637    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:37:08.328747    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:37:08.328757    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:37:08.328838    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:37:08.328975    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:37:08.328981    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:37:08.329139    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:37:08.336279    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:37:08.355765    4951 start.go:296] duration metric: took 65.822719ms for postStartSetup
	I1003 20:37:08.355783    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.355979    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:37:08.355992    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.356088    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.356171    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.356261    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.356337    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.391155    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:37:08.391224    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:37:08.443555    4951 fix.go:56] duration metric: took 37.330343063s for fixHost
	I1003 20:37:08.443608    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.443871    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.444091    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.444747    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:08.444947    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:08.444959    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:37:08.504053    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013028.627108120
	
	I1003 20:37:08.504066    4951 fix.go:216] guest clock: 1728013028.627108120
	I1003 20:37:08.504071    4951 fix.go:229] Guest: 2024-10-03 20:37:08.62710812 -0700 PDT Remote: 2024-10-03 20:37:08.443578 -0700 PDT m=+80.177024984 (delta=183.53012ms)
	I1003 20:37:08.504082    4951 fix.go:200] guest clock delta is within tolerance: 183.53012ms
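	
	The "date +%s.%N" probe measures guest/host clock skew; the 183.53012ms delta logged above is inside tolerance, so no clock resync is attempted. A sketch of parsing that output and checking a drift threshold (the 2s tolerance below is an assumption, not minikube's constant):
	
	    package main
	
	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )
	
	    // parseGuestClock turns "date +%s.%N" output into a time.Time.
	    func parseGuestClock(out string) (time.Time, error) {
	        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return time.Time{}, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	                return time.Time{}, err
	            }
	        }
	        return time.Unix(sec, nsec), nil
	    }
	
	    func main() {
	        // Guest sample taken from the log; in the real flow the host clock is
	        // read immediately after the SSH command returns, so the two compare.
	        guest, err := parseGuestClock("1728013028.627108120")
	        if err != nil {
	            panic(err)
	        }
	        delta := time.Since(guest)
	        if delta < 0 {
	            delta = -delta
	        }
	        const tolerance = 2 * time.Second // hypothetical threshold
	        fmt.Printf("guest clock delta: %v (within %v: %v)\n", delta, tolerance, delta <= tolerance)
	    }
	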
	I1003 20:37:08.504087    4951 start.go:83] releasing machines lock for "ha-214000-m02", held for 37.390896714s
	I1003 20:37:08.504111    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.504258    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:08.525607    4951 out.go:177] * Found network options:
	I1003 20:37:08.567619    4951 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:37:08.588274    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.588315    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589205    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589467    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589610    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:37:08.589649    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:37:08.589687    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.589812    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:37:08.589832    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.589864    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590034    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590064    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590259    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590278    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590517    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.590537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590701    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:37:08.623322    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:37:08.623398    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:37:08.670987    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:37:08.671009    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.671107    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:08.687184    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:37:08.696174    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:37:08.705216    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:37:08.705268    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:37:08.714371    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.723383    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:37:08.732289    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.741295    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:37:08.750471    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:37:08.759323    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:37:08.768482    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
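	
	The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false) and the io.containerd.runc.v2 shim. One of those edits expressed in Go, as an illustrative sketch of the regex rewrite:
	
	    package main
	
	    import (
	        "fmt"
	        "regexp"
	    )
	
	    func main() {
	        // Example config.toml fragment; the real file is edited in place on the VM.
	        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true
	    `
	        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	    }
	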
	I1003 20:37:08.777704    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:37:08.785806    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:37:08.785866    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:37:08.794894    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:37:08.803171    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:08.897940    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:37:08.916833    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.916918    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:37:08.930156    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.942286    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:37:08.960158    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.971885    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:08.982659    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:37:08.999726    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:09.010351    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:09.025433    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:37:09.028502    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:37:09.035822    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:37:09.049466    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:37:09.162468    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:37:09.273558    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:37:09.273582    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:37:09.288188    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:09.384897    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:38:10.406862    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021950572s)
	I1003 20:38:10.406948    4951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:38:10.444120    4951 out.go:201] 
	W1003 20:38:10.464959    4951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:37:06 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391345461Z" level=info msg="Starting up"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391833106Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.395520305Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.412871636Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427882861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427981520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428050653Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428085226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428277072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428327604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428478894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428520070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428552138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428580964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428720722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428931280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430522141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430571354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430698188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430740032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430878079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430929217Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431351881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431440610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431485738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431519039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431551337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431619359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431825238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431902729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431941069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431978377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432012357Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432042063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432070459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432099321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432133473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432169855Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432202720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432268312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432315741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432351145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432383859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432414347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432447070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432476073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432510884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432548105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432578396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432608431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432640682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432669603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432698487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432729184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432768850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432801425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432829061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432911216Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432958882Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432989050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433017196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433045319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433074497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433102613Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433279017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433339149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433390358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433425703Z" level=info msg="containerd successfully booted in 0.021412s"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.415071774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.421056219Z" level=info msg="Loading containers: start."
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.500314931Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.331296883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.376605057Z" level=info msg="Loading containers: done."
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387546240Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387606581Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387647157Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387769053Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411526135Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411682523Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:37:08 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527035720Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:37:09 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527893788Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528149338Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528188105Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528221468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:10 ha-214000-m02 dockerd[929]: time="2024-10-04T03:37:10.559000347Z" level=info msg="Starting up"
	Oct 04 03:38:10 ha-214000-m02 dockerd[929]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
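	(The tail of the journal shows the proximate failure: on its second start, dockerd times out dialing the system containerd socket at /run/containerd/containerd.sock, which also explains why the systemctl restart above blocked for just over a minute before exiting. With shell access to the VM, a few standard commands, illustrative and not part of this run, would show whether containerd itself ever came up:
	    $ sudo systemctl status containerd --no-pager
	    $ sudo journalctl -u containerd --no-pager | tail -n 50
	    $ ls -l /run/containerd/containerd.sock
	)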
	W1003 20:38:10.465066    4951 out.go:270] * 
	W1003 20:38:10.466299    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:38:10.543824    4951 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (3.055681249s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node start m02 -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000 -v=7               | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-214000 -v=7                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT | 03 Oct 24 20:31 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	| node    | ha-214000 node delete m03 -v=7       | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-214000 stop -v=7                  | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT | 03 Oct 24 20:35 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true             | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:35 PDT |                     |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
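	(Per the audit trail above, the failure reduces to a clean stop of the HA cluster followed by a full restart; assuming the same profile and binary, the sequence replays with the commands taken from the Args column:
	    $ out/minikube-darwin-amd64 stop -p ha-214000 -v=7 --alsologtostderr
	    $ out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr --driver=hyperkit
	)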
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:35:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:35:48.304540    4951 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:48.304733    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304739    4951 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:48.304743    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304927    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:35:48.306332    4951 out.go:352] Setting JSON to false
	I1003 20:35:48.334066    4951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3918,"bootTime":1728009030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:35:48.334215    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:35:48.356076    4951 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:35:48.398703    4951 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:35:48.398800    4951 notify.go:220] Checking for updates...
	I1003 20:35:48.442667    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:35:48.463910    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:35:48.485340    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:35:48.506572    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:35:48.527740    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:35:48.550278    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:48.551029    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.551094    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.563226    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51755
	I1003 20:35:48.563804    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.564307    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.564319    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.564662    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.564822    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.565117    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:35:48.565435    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.565487    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.576762    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I1003 20:35:48.577263    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.577677    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.577713    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.578069    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.578299    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.610723    4951 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:35:48.652521    4951 start.go:297] selected driver: hyperkit
	I1003 20:35:48.652550    4951 start.go:901] validating driver "hyperkit" against &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.652818    4951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:35:48.653002    4951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.653249    4951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:35:48.665237    4951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:35:48.671535    4951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.671574    4951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:35:48.676549    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:35:48.676588    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:35:48.676625    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:35:48.676690    4951 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.676815    4951 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.698601    4951 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:35:48.740785    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:35:48.740857    4951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:35:48.740884    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:35:48.741146    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:35:48.741164    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:35:48.741343    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.742237    4951 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:48.742380    4951 start.go:364] duration metric: took 119.499µs to acquireMachinesLock for "ha-214000"
	I1003 20:35:48.742414    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:48.742428    4951 fix.go:54] fixHost starting: 
	I1003 20:35:48.742857    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.742889    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.754302    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I1003 20:35:48.754621    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.754990    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.755005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.755241    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.755370    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.755459    4951 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:35:48.755544    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.755632    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:35:48.756648    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.756678    4951 fix.go:112] recreateIfNeeded on ha-214000: state=Stopped err=<nil>
	I1003 20:35:48.756695    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	W1003 20:35:48.756784    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:48.778933    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000" ...
	I1003 20:35:48.800930    4951 main.go:141] libmachine: (ha-214000) Calling .Start
	I1003 20:35:48.801199    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.801247    4951 main.go:141] libmachine: (ha-214000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:35:48.803311    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.803325    4951 main.go:141] libmachine: (ha-214000) DBG | pid 4822 is in state "Stopped"
	I1003 20:35:48.803341    4951 main.go:141] libmachine: (ha-214000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid...
	I1003 20:35:48.803610    4951 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:35:48.922193    4951 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:35:48.922226    4951 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:35:48.922379    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922424    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922546    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:35:48.922605    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:35:48.922622    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:35:48.924313    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Pid is 4964
	I1003 20:35:48.924838    4951 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:35:48.924852    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.924911    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4964
	I1003 20:35:48.927353    4951 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:35:48.927405    4951 main.go:141] libmachine: (ha-214000) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:35:48.927432    4951 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6fc2}
	I1003 20:35:48.927443    4951 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:35:48.927454    4951 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
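
For context: after restarting the VM, the driver discovers its IP by matching the generated MAC address against the macOS vmnet DHCP lease database, as the DBG lines above show. A minimal Go sketch of that lookup (the exact on-disk lease-file format is an assumption; only the path and MAC come from this log):

    // sketch: find the IP leased to a given MAC by scanning
    // /var/db/dhcpd_leases (exact lease-file layout assumed).
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const mac = "a:aa:e8:3c:fe:20" // the MAC generated for the VM above
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		// assumes each lease entry lists ip_address= before hw_address=
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
    			fmt.Println("IP:", ip)
    			return
    		}
    	}
    	fmt.Println("no lease found for", mac)
    }
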
	I1003 20:35:48.927543    4951 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:35:48.928494    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:35:48.928701    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.929276    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:35:48.929289    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.929410    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:35:48.929535    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:35:48.929649    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929777    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929900    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:35:48.930094    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:35:48.930303    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:35:48.930312    4951 main.go:141] libmachine: About to run SSH command:
	hostname
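
The "native" SSH client named above is Go's golang.org/x/crypto/ssh package rather than a shelled-out ssh binary. A minimal sketch of running the same `hostname` probe that way, reusing the IP, port, user, and key path that appear in this log (everything else is an assumption):

    // sketch: run a command over SSH the way the "native" client type does.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this against real hosts
    	}
    	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname") // the exact command the log runs
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }
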
	I1003 20:35:48.935400    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:35:48.990306    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:35:48.991238    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:48.991260    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:48.991278    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:48.991294    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.374490    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:35:49.374504    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:35:49.489812    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:49.489840    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:49.489854    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:49.489865    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.490699    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:35:49.490709    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:35:55.079541    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:35:55.079635    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:35:55.079652    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:35:55.103846    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:36:23.994265    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:36:23.994281    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994427    4951 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:36:23.994438    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994568    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:23.994676    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:23.994778    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994888    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994989    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:23.995134    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:23.995292    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:23.995301    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:36:24.061419    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:36:24.061438    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.061566    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.061665    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061761    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061855    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.062009    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.062160    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.062171    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
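
The script above is idempotent: it leaves /etc/hosts untouched if any line already ends with the hostname, rewrites an existing 127.0.1.1 entry if one is present, and appends a new one otherwise. The same logic as a small Go sketch (an assumed helper, not minikube's own code):

    package main

    import (
    	"os"
    	"regexp"
    )

    // ensureHostsEntry mirrors the shell above: no-op if the hostname is
    // already mapped, otherwise rewrite (or append) the 127.0.1.1 entry.
    func ensureHostsEntry(path, name string) error {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(b) {
    		return nil // hostname already mapped
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.Match(b) {
    		b = re.ReplaceAll(b, []byte("127.0.1.1 "+name))
    	} else {
    		b = append(b, []byte("127.0.1.1 "+name+"\n")...)
    	}
    	return os.WriteFile(path, b, 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "ha-214000"); err != nil {
    		panic(err)
    	}
    }
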
	I1003 20:36:24.123229    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:36:24.123250    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:36:24.123267    4951 buildroot.go:174] setting up certificates
	I1003 20:36:24.123274    4951 provision.go:84] configureAuth start
	I1003 20:36:24.123280    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:24.123436    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:24.123534    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.123640    4951 provision.go:143] copyHostCerts
	I1003 20:36:24.123670    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123751    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:36:24.123759    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123933    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:36:24.124159    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124208    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:36:24.124213    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124299    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:36:24.124456    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124504    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:36:24.124508    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124593    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:36:24.124759    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:36:24.242470    4951 provision.go:177] copyRemoteCerts
	I1003 20:36:24.242536    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:36:24.242550    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.242680    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.242779    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.242882    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.242976    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:24.278106    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:36:24.278181    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:36:24.297749    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:36:24.297814    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 20:36:24.317337    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:36:24.317417    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:36:24.337360    4951 provision.go:87] duration metric: took 214.07513ms to configureAuth
	I1003 20:36:24.337374    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:36:24.337568    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:24.337582    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:24.337722    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.337811    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.337893    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.337973    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.338066    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.338199    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.338322    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.338329    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:36:24.392942    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:36:24.392953    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:36:24.393026    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:36:24.393038    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.393177    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.393275    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393375    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393458    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.393607    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.393746    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.393789    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:36:24.457890    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:36:24.457915    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.458049    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.458145    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458223    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458324    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.458459    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.458606    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.458617    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
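
The one-liner above is a compare-then-swap: if `diff` finds the rendered unit identical to the live one, nothing happens; otherwise it moves the .new file into place and reloads, enables, and restarts docker. On this boot the live unit does not exist yet, hence the "can't stat" message and the fresh symlink in the output below. Roughly the same logic in Go (an assumed helper, not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installUnit installs a rendered unit only when its contents differ
    // from the live one, then reloads/enables/restarts the service.
    func installUnit(newPath, livePath string) error {
    	newB, err := os.ReadFile(newPath)
    	if err != nil {
    		return err
    	}
    	liveB, _ := os.ReadFile(livePath) // missing live unit (first boot) compares as different
    	if bytes.Equal(newB, liveB) {
    		return nil // unchanged: skip the disruptive restart
    	}
    	if err := os.Rename(newPath, livePath); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %s", err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installUnit("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
    		panic(err)
    	}
    }
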
	I1003 20:36:26.102134    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:36:26.102148    4951 machine.go:96] duration metric: took 37.172864722s to provisionDockerMachine
	I1003 20:36:26.102162    4951 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:36:26.102174    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:36:26.102184    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.102399    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:36:26.102415    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.102503    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.102602    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.102703    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.102803    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.136711    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:36:26.139862    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:36:26.139874    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:36:26.139975    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:36:26.140193    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:36:26.140200    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:36:26.140451    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:36:26.147627    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:26.167774    4951 start.go:296] duration metric: took 65.6041ms for postStartSetup
	I1003 20:36:26.167794    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.167968    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:36:26.167979    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.168089    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.168182    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.168259    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.168350    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.202842    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:36:26.202914    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:36:26.255647    4951 fix.go:56] duration metric: took 37.513223093s for fixHost
	I1003 20:36:26.255670    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.255816    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.255918    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256012    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256105    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.256247    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:26.256399    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:26.256406    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:36:26.311780    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012986.433392977
	
	I1003 20:36:26.311792    4951 fix.go:216] guest clock: 1728012986.433392977
	I1003 20:36:26.311797    4951 fix.go:229] Guest: 2024-10-03 20:36:26.433392977 -0700 PDT Remote: 2024-10-03 20:36:26.25566 -0700 PDT m=+37.989104353 (delta=177.732977ms)
	I1003 20:36:26.311814    4951 fix.go:200] guest clock delta is within tolerance: 177.732977ms
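
The clock check above runs `date +%s.%N` on the guest and compares it with the host's wall clock; a small delta (here about 178ms) means no time resync is needed. A sketch of that comparison, with the tolerance value assumed since the log does not print it:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, taken from the log above.
    	guestOut := "1728012986.433392977"
    	secs, err := strconv.ParseFloat(guestOut, 64) // float parse is a simplification; it loses ns precision
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold; the log only says "within tolerance"
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
    }
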
	I1003 20:36:26.311818    4951 start.go:83] releasing machines lock for "ha-214000", held for 37.569431066s
	I1003 20:36:26.311838    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.311964    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:26.312074    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312353    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312465    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312560    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:36:26.312588    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312635    4951 ssh_runner.go:195] Run: cat /version.json
	I1003 20:36:26.312646    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312690    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312745    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312781    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312825    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312873    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.312925    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.313009    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.313022    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.345222    4951 ssh_runner.go:195] Run: systemctl --version
	I1003 20:36:26.396121    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:36:26.401139    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:36:26.401189    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:36:26.413838    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:36:26.413851    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.413956    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.430665    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:36:26.439518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:36:26.448241    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:36:26.448295    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:36:26.457135    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.465984    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:36:26.474764    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.483576    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:36:26.492518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:36:26.501284    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:36:26.510114    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:36:26.518992    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:36:26.527133    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:36:26.527188    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:36:26.536233    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:36:26.544367    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:26.641761    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:36:26.661796    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.661912    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:36:26.678816    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.689242    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:36:26.701530    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.713140    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.724511    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:36:26.748353    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.759647    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.774287    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:36:26.777216    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:36:26.785211    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:36:26.800364    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:36:26.895359    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:36:27.004148    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:36:27.004239    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:36:27.018268    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:27.118971    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:36:29.441016    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.322026405s)
	I1003 20:36:29.441097    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:36:29.451786    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.462092    4951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:36:29.564537    4951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:36:29.669649    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.781720    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:36:29.795175    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.806194    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.917885    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:36:29.986582    4951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:36:29.986686    4951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:36:29.991213    4951 start.go:563] Will wait 60s for crictl version
	I1003 20:36:29.991273    4951 ssh_runner.go:195] Run: which crictl
	I1003 20:36:29.994306    4951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:36:30.019989    4951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:36:30.020072    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.036824    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.075524    4951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:36:30.075569    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:30.076023    4951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:36:30.080492    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.091206    4951 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:36:30.091284    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:30.091356    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.103771    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.103786    4951 docker.go:615] Images already preloaded, skipping extraction
	I1003 20:36:30.103870    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.126324    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.126343    4951 cache_images.go:84] Images are preloaded, skipping loading
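
The preload decision above boils down to listing the runtime's images and confirming the expected set is present; only if something were missing would the cached tarball be extracted. A sketch of that check (the expected list is trimmed to three entries from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// A subset of the preloaded images listed in the log above.
    	expected := []string{
    		"registry.k8s.io/kube-apiserver:v1.31.1",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/pause:3.10",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			fmt.Println("missing", img, "- preload tarball would be extracted")
    			return
    		}
    	}
    	fmt.Println("images already preloaded, skipping extraction")
    }
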
	I1003 20:36:30.126351    4951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:36:30.126423    4951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:36:30.126505    4951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:36:30.165944    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:36:30.165958    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:36:30.165970    4951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:36:30.165987    4951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:36:30.166068    4951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:36:30.166080    4951 kube-vip.go:115] generating kube-vip config ...
	I1003 20:36:30.166149    4951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:36:30.180124    4951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:36:30.180189    4951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1003 20:36:30.180256    4951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:36:30.189222    4951 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:36:30.189287    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:36:30.198523    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:36:30.212259    4951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:36:30.225613    4951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:36:30.239086    4951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1003 20:36:30.252640    4951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:36:30.255560    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.265017    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:30.361055    4951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:36:30.373903    4951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:36:30.373915    4951 certs.go:194] generating shared ca certs ...
	I1003 20:36:30.373925    4951 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.374133    4951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:36:30.374229    4951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:36:30.374245    4951 certs.go:256] generating profile certs ...
	I1003 20:36:30.374372    4951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:36:30.374395    4951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9
	I1003 20:36:30.374412    4951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1003 20:36:30.510048    4951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 ...
	I1003 20:36:30.510064    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9: {Name:mkec630c178c10067131af2c5f3c9dd0e1fb1860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510503    4951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 ...
	I1003 20:36:30.510513    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9: {Name:mk3eade5c23e406463c386755ec0dc38e869ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510763    4951 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:36:30.511004    4951 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
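For an HA profile the apiserver certificate has to cover the service IP (10.96.0.1), both control-plane node IPs, and the VIP 192.169.0.254 listed above. One way to double-check the SANs actually written into the cert, as a sketch with plain openssl:

	# print the IP and DNS SANs baked into the freshly copied apiserver cert
	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'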
	I1003 20:36:30.511276    4951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:36:30.511286    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:36:30.511308    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:36:30.511328    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:36:30.511347    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:36:30.511373    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:36:30.511393    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:36:30.511411    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:36:30.511428    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:36:30.511527    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:36:30.511580    4951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:36:30.511594    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:36:30.511627    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:36:30.511660    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:36:30.511688    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:36:30.511757    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:30.511791    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.511811    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.511829    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.512286    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:36:30.547800    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:36:30.588463    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:36:30.624659    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:36:30.646082    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:36:30.665519    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:36:30.684966    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:36:30.704971    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:36:30.724730    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:36:30.744135    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:36:30.763735    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:36:30.782963    4951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:36:30.796275    4951 ssh_runner.go:195] Run: openssl version
	I1003 20:36:30.800456    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:36:30.808784    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812168    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812211    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.816317    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:36:30.824743    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:36:30.833176    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836568    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836613    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.840895    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:36:30.849202    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:36:30.857643    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861134    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861184    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.865411    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
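The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are not arbitrary: each is the OpenSSL subject hash of the certificate it points at, which is how the system trust store indexes CAs. The rule behind those three symlink commands, as a sketch:

	# /etc/ssl/certs/<subject-hash>.0 is the name OpenSSL looks up for a trusted cert
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941 here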
	I1003 20:36:30.873865    4951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:36:30.877389    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:36:30.881788    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:36:30.886088    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:36:30.890422    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:36:30.894596    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:36:30.898773    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
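openssl x509 -checkend 86400 exits non-zero if the certificate will expire within the next 86400 seconds, so each probe above is a cheap 24-hour validity gate; the restart path proceeds because all six pass. The same sweep as a loop, as a sketch to run on the node:

	for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	           etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	    && echo "ok: ${crt}" || echo "expiring within 24h: ${crt}"
	done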
	I1003 20:36:30.902881    4951 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:36:30.902998    4951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:36:30.915213    4951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:36:30.923319    4951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:36:30.923331    4951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:36:30.923384    4951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:36:30.930635    4951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:36:30.930978    4951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.931055    4951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1440/kubeconfig needs updating (will repair): [kubeconfig missing "ha-214000" cluster setting kubeconfig missing "ha-214000" context setting]
	I1003 20:36:30.931232    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.931928    4951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.932136    4951 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd994f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:36:30.932465    4951 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:36:30.932658    4951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:36:30.939898    4951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1003 20:36:30.939909    4951 kubeadm.go:597] duration metric: took 16.574315ms to restartPrimaryControlPlane
	I1003 20:36:30.939914    4951 kubeadm.go:394] duration metric: took 37.038509ms to StartCluster
	I1003 20:36:30.939939    4951 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940028    4951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.940366    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940584    4951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:30.940597    4951 start.go:241] waiting for startup goroutines ...
	I1003 20:36:30.940605    4951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:36:30.940715    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:30.982685    4951 out.go:177] * Enabled addons: 
	I1003 20:36:31.003752    4951 addons.go:510] duration metric: took 63.132383ms for enable addons: enabled=[]
	I1003 20:36:31.003791    4951 start.go:246] waiting for cluster config update ...
	I1003 20:36:31.003802    4951 start.go:255] writing updated cluster config ...
	I1003 20:36:31.026641    4951 out.go:201] 
	I1003 20:36:31.047648    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:31.047721    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.069716    4951 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:36:31.111550    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:31.111584    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:36:31.111814    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:36:31.111847    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:36:31.111978    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.113032    4951 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:31.113184    4951 start.go:364] duration metric: took 124.813µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:36:31.113203    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:36:31.113208    4951 fix.go:54] fixHost starting: m02
	I1003 20:36:31.113580    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:36:31.113606    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:36:31.125064    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I1003 20:36:31.125517    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:36:31.125993    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:36:31.126005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:36:31.126252    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:36:31.126414    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.126604    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:36:31.126798    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.126890    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:36:31.127965    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:36:31.127999    4951 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
	I1003 20:36:31.128009    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	W1003 20:36:31.128129    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:36:31.170879    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
	I1003 20:36:31.191480    4951 main.go:141] libmachine: (ha-214000-m02) Calling .Start
	I1003 20:36:31.191791    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.191820    4951 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:36:31.191892    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:36:31.219578    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:36:31.219600    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:36:31.219761    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219849    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:36:31.219889    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:36:31.219902    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:36:31.221267    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Pid is 4978
	I1003 20:36:31.221656    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:36:31.221669    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.221749    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4978
	I1003 20:36:31.222942    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:36:31.223055    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:36:31.223074    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 20:36:31.223092    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff619f}
	I1003 20:36:31.223117    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:36:31.223134    4951 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
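Hyperkit exposes no guest-IP API, so the driver recovers the address by matching the VM's generated MAC against macOS's vmnet DHCP lease database. Reproducing the lookup by hand, as a sketch (MAC taken from the log lines above):

	# each lease record in /var/db/dhcpd_leases pairs a name, ip_address and hw_address
	grep -B2 -A2 '8e:24:b7:e1:5:14' /var/db/dhcpd_leases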
	I1003 20:36:31.223155    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:36:31.223858    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:36:31.224037    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.224458    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:36:31.224468    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.224583    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:36:31.224679    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:36:31.224777    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.224929    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.225026    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:36:31.225183    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:31.225340    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:36:31.225347    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:36:31.232364    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:36:31.241337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:36:31.242541    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.242561    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.242572    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.242585    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.630094    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:36:31.630110    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:36:31.744778    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.744796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.744827    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.744846    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.745666    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:36:31.745681    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:36:37.337247    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:36:37.337337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:36:37.337350    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:36:37.361028    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:37:06.292112    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:37:06.292127    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292262    4951 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:37:06.292277    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292374    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.292454    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.292532    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292617    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292696    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.292835    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.292968    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.292976    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:37:06.362584    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:37:06.362599    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.362740    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.362851    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.362945    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.363048    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.363204    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.363366    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.363377    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:37:06.429246    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:37:06.429262    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:37:06.429275    4951 buildroot.go:174] setting up certificates
	I1003 20:37:06.429281    4951 provision.go:84] configureAuth start
	I1003 20:37:06.429287    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.429430    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:06.429529    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.429617    4951 provision.go:143] copyHostCerts
	I1003 20:37:06.429649    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429696    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:37:06.429701    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429820    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:37:06.430049    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430079    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:37:06.430084    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430193    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:37:06.430369    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430399    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:37:06.430404    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430485    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:37:06.430651    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
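The docker TLS server certificate is issued for every name the daemon might be dialed by: loopback, the node IP, the machine name, and the localhost/minikube aliases in the san list above. Verifying what landed in server.pem, as a sketch:

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'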
	I1003 20:37:06.504641    4951 provision.go:177] copyRemoteCerts
	I1003 20:37:06.504702    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:37:06.504733    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.504884    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.504988    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.505086    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.505168    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:06.541867    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:37:06.541936    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:37:06.560930    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:37:06.560992    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:37:06.579917    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:37:06.579984    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:37:06.599634    4951 provision.go:87] duration metric: took 170.34603ms to configureAuth
	I1003 20:37:06.599649    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:37:06.599816    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:37:06.599829    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:06.599963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.600044    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.600140    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600213    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600306    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.600434    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.600557    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.600564    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:37:06.660138    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:37:06.660150    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:37:06.660232    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:37:06.660242    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.660378    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.660498    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660607    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660708    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.660861    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.661001    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.661049    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:37:06.728946    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:37:06.728963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.729096    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.729209    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729384    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.729544    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.729682    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.729693    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:37:08.289911    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
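The command above only installs docker.service.new when it differs from the live unit (or when, as here, no unit exists yet), so repeated provisioning runs don't bounce dockerd needlessly. The same guard unrolled, as a sketch:

	# swap in the candidate unit and restart docker only when something changed
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi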
	
	I1003 20:37:08.289925    4951 machine.go:96] duration metric: took 37.065461315s to provisionDockerMachine
	I1003 20:37:08.289933    4951 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:37:08.289944    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:37:08.289954    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.290150    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:37:08.290163    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.290256    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.290347    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.290425    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.290523    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.325637    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:37:08.328747    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:37:08.328757    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:37:08.328838    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:37:08.328975    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:37:08.328981    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:37:08.329139    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:37:08.336279    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:37:08.355765    4951 start.go:296] duration metric: took 65.822719ms for postStartSetup
	I1003 20:37:08.355783    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.355979    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:37:08.355992    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.356088    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.356171    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.356261    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.356337    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.391155    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:37:08.391224    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:37:08.443555    4951 fix.go:56] duration metric: took 37.330343063s for fixHost
	I1003 20:37:08.443608    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.443871    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.444091    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.444747    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:08.444947    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:08.444959    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:37:08.504053    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013028.627108120
	
	I1003 20:37:08.504066    4951 fix.go:216] guest clock: 1728013028.627108120
	I1003 20:37:08.504071    4951 fix.go:229] Guest: 2024-10-03 20:37:08.62710812 -0700 PDT Remote: 2024-10-03 20:37:08.443578 -0700 PDT m=+80.177024984 (delta=183.53012ms)
	I1003 20:37:08.504082    4951 fix.go:200] guest clock delta is within tolerance: 183.53012ms
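The tolerance check compares the guest's date +%s.%N against the host clock captured when the SSH command returned; the ~184ms delta is well inside tolerance, so no clock resync is triggered. A rough by-hand comparison, as a sketch (host side uses whole seconds because macOS date(1) lacks %N):

	guest=$(minikube ssh -p ha-214000 -n m02 -- date +%s.%N)
	host=$(date +%s)
	echo "guest=${guest} host=${host}"   # sub-second skew is acceptable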
	I1003 20:37:08.504087    4951 start.go:83] releasing machines lock for "ha-214000-m02", held for 37.390896714s
	I1003 20:37:08.504111    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.504258    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:08.525607    4951 out.go:177] * Found network options:
	I1003 20:37:08.567619    4951 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:37:08.588274    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.588315    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589205    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589467    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589610    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:37:08.589649    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:37:08.589687    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.589812    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:37:08.589832    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.589864    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590034    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590064    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590259    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590278    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590517    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.590537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590701    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:37:08.623322    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:37:08.623398    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:37:08.670987    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
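The find/mv pass above sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled. A rough Go equivalent of that rename sweep (same directory, patterns, and suffix as the logged command; the traversal itself is an illustration, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	const suffix = ".mk_disabled"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Mirror the find predicates: plain files matching *bridge* or
		// *podman*, skipping anything already disabled.
		if e.IsDir() || strings.HasSuffix(name, suffix) {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+suffix); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("disabled %s\n", src)
	}
}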
	I1003 20:37:08.671009    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.671107    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:08.687184    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:37:08.696174    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:37:08.705216    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:37:08.705268    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:37:08.714371    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.723383    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:37:08.732289    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.741295    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:37:08.750471    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:37:08.759323    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:37:08.768482    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
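The sed pipeline in the preceding lines amounts to one decision: rewrite /etc/containerd/config.toml so that SystemdCgroup = false, putting containerd on the cgroupfs driver to match the rest of the stack. A hedged Go sketch of that single line-oriented rewrite (the regex is taken from the logged sed command; error handling is simplified):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}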
	I1003 20:37:08.777704    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:37:08.785806    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:37:08.785866    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:37:08.794894    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:37:08.803171    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:08.897940    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:37:08.916833    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.916918    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:37:08.930156    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.942286    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:37:08.960158    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.971885    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:08.982659    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:37:08.999726    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:09.010351    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:09.025433    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:37:09.028502    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:37:09.035822    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:37:09.049466    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:37:09.162468    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:37:09.273558    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:37:09.273582    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
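The log records only that a 130-byte /etc/docker/daemon.json was copied in to select the cgroupfs driver; the payload itself is not shown. A plausible reconstruction written from Go, with the caveat that the exact field set is an assumption (exec-opts is Docker's documented knob for the cgroup driver; the log-driver field is an illustrative default, not the verbatim payload):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Assumed shape of the daemon.json minikube ships for cgroupfs.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/docker/daemon.json", append(data, '\n'), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}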
	I1003 20:37:09.288188    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:09.384897    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:38:10.406862    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021950572s)
	I1003 20:38:10.406948    4951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:38:10.444120    4951 out.go:201] 
	W1003 20:38:10.464959    4951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:37:06 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391345461Z" level=info msg="Starting up"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391833106Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.395520305Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.412871636Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427882861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427981520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428050653Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428085226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428277072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428327604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428478894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428520070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428552138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428580964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428720722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428931280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430522141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430571354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430698188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430740032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430878079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430929217Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431351881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431440610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431485738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431519039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431551337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431619359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431825238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431902729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431941069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431978377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432012357Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432042063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432070459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432099321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432133473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432169855Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432202720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432268312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432315741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432351145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432383859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432414347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432447070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432476073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432510884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432548105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432578396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432608431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432640682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432669603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432698487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432729184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432768850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432801425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432829061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432911216Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432958882Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432989050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433017196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433045319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433074497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433102613Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433279017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433339149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433390358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433425703Z" level=info msg="containerd successfully booted in 0.021412s"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.415071774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.421056219Z" level=info msg="Loading containers: start."
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.500314931Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.331296883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.376605057Z" level=info msg="Loading containers: done."
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387546240Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387606581Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387647157Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387769053Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411526135Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411682523Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:37:08 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527035720Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:37:09 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527893788Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528149338Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528188105Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528221468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:10 ha-214000-m02 dockerd[929]: time="2024-10-04T03:37:10.559000347Z" level=info msg="Starting up"
	Oct 04 03:38:10 ha-214000-m02 dockerd[929]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:38:10.465066    4951 out.go:270] * 
	W1003 20:38:10.466299    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:38:10.543824    4951 out.go:201] 
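The decisive journal line in the dump above is dockerd giving up on /run/containerd/containerd.sock with "context deadline exceeded": after the restart, the daemon's managed containerd never came up to accept the connection. A small Go probe that reproduces the same symptom by hand, dialing the socket under a deadline (socket path from the log; the five-second timeout is arbitrary):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/run/containerd/containerd.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", sock)
	if err != nil {
		// A hung or absent containerd shows up here the same way it does
		// for dockerd: the dial never completes before the deadline.
		fmt.Printf("dial %s failed: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("containerd is accepting connections on %s\n", sock)
}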
	
	
	==> Docker <==
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547509520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547587217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547600394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547679278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 cri-dockerd[1370]: time="2024-10-04T03:36:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc56c1f3c299c74527bc4bad7199ef2947f06a7fa736aaf71ff605e8aa07e0ac/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613336411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613472160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613483466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613584473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646020305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646150537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646177738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646306268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688829574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688917158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688931527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.689001023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:15 ha-214000 dockerd[1116]: time="2024-10-04T03:37:15.932971006Z" level=info msg="ignoring event" container=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933175492Z" level=info msg="shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933213613Z" level=warning msg="cleaning up after shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933220107Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691601551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691989127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692103682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692303810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ebd8d90ba3e8f       6e38f40d628db                                                                                         45 seconds ago       Running             storage-provisioner       2                   f61fdecdb5ed1       storage-provisioner
	f9fb6aeea4b68       12968670680f4                                                                                         About a minute ago   Running             kindnet-cni               1                   fc56c1f3c299c       kindnet-flq8x
	e388df4554b33       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   2401eafe0bd31       busybox-7dff88458-m7hqf
	e6ef332ed5737       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   c142be8b44551       coredns-7c65d6cfc9-slrtf
	985956e1cb3da       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   bf51af5037cab       coredns-7c65d6cfc9-l4wpg
	666390dc434d9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   f61fdecdb5ed1       storage-provisioner
	e870db0c09c44       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   69d6d030cf38a       kube-proxy-grxks
	2bccf57dd1cf7       18b729c2288dc                                                                                         About a minute ago   Running             kube-vip                  0                   013ce7946a369       kube-vip-ha-214000
	5c0e6f76f23f0       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   3b875ceff5048       kube-scheduler-ha-214000
	3a34ed1393f8c       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   9863db4133f6a       etcd-ha-214000
	18a77afff888c       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            1                   61526ecfca3d5       kube-apiserver-ha-214000
	bf67ec881904c       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   e9d8b9ee53b05       kube-controller-manager-ha-214000
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   24 minutes ago       Exited              busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         25 minutes ago       Exited              coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         25 minutes ago       Exited              coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              25 minutes ago       Exited              kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         26 minutes ago       Exited              kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	95af0d749f454       6bab7719df100                                                                                         26 minutes ago       Exited              kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         26 minutes ago       Exited              kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         26 minutes ago       Exited              etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [985956e1cb3d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58650 - 4380 "HINFO IN 7121940411115309935.5046063770853036442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044429796s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[262325284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.911) (total time: 30001ms):
	Trace[262325284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[262325284]: [30.001612351s] [30.001612351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1313713214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30002ms):
	Trace[1313713214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.914)
	Trace[1313713214]: [30.00243392s] [30.00243392s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1235317752]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30003ms):
	Trace[1235317752]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.915)
	Trace[1235317752]: [30.003174126s] [30.003174126s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6ef332ed573] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60885 - 59353 "HINFO IN 8975973012052876199.2679720306794618198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011598991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1634039844]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30003ms):
	Trace[1634039844]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.913)
	Trace[1634039844]: [30.003123911s] [30.003123911s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1181919593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30001ms):
	Trace[1181919593]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.914)
	Trace[1181919593]: [30.001966872s] [30.001966872s] END
	[INFO] plugin/kubernetes: Trace[1826819322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30001ms):
	Trace[1826819322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[1826819322]: [30.001980832s] [30.001980832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
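All three coredns instances above share one failure mode: list/watch calls to the in-cluster API VIP 10.96.0.1:443 stall for the reflector's full 30s budget and end in i/o timeouts, consistent with the service IP not being routed while the control plane restarts. A quick Go reachability check against the same endpoint (address and path from the logs; TLS verification is skipped only because this probe tests routing, which a real client must never do):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 30 * time.Second, // same budget the reflector traces show
		Transport: &http.Transport{
			// Reachability probe only; an auth error still proves routing.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/api/v1/namespaces?limit=500")
	if err != nil {
		fmt.Println("kubernetes service VIP unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// A 401/403 here means the VIP routes; an i/o timeout means it does not.
	fmt.Println("VIP reachable, status:", resp.Status)
}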
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:38:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cf440f8eb534a62b20c31c760022e88
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    a841ad05-f0b0-46f0-962d-fb6544f3eb77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 26m                  kube-proxy       
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  Starting                 26m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26m (x3 over 26m)    kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x3 over 26m)    kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x2 over 26m)    kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    26m                  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  26m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m                  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     26m                  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  Starting                 26m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                  node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                25m                  kubelet          Node ha-214000 status is now: NodeReady
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                  node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-214000-m03 status is now: NodeReady
	  Normal  RegisteredNode           89s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeNotReady             49s                node-controller  Node ha-214000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036162] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007697] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.692433] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.638785] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.210765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 4 03:36] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.104887] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +1.918276] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.255425] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +0.098027] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.126300] systemd-fstab-generator[1107]: Ignoring "noauto" option for root device
	[  +2.450359] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.101909] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.051667] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.054007] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +0.136592] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.448600] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +6.747516] kauditd_printk_skb: 88 callbacks suppressed
	[  +7.915746] kauditd_printk_skb: 40 callbacks suppressed
	[Oct 4 03:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:27:00.215476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-04T03:27:00.217042Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"1.236946ms","hash":1433174615,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-04T03:27:00.217099Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1433174615,"revision":1514,"compact-revision":973}
	{"level":"info","ts":"2024-10-04T03:31:28.060845Z","caller":"traceutil/trace.go:171","msg":"trace[860112081] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"112.489562ms","start":"2024-10-04T03:31:27.948335Z","end":"2024-10-04T03:31:28.060825Z","steps":["trace[860112081] 'process raft request'  (duration: 91.094323ms)","trace[860112081] 'compare'  (duration: 21.269614ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:31:44.553900Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:31:44.553958Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	{"level":"warn","ts":"2024-10-04T03:31:44.554007Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554028Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554108Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.562422Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:31:44.579712Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-10-04T03:31:44.581173Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581242Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581251Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [3a34ed1393f8] <==
	{"level":"info","ts":"2024-10-04T03:36:37.939034Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"b8c6c7563d17d844","added-peer-peer-urls":["https://192.169.0.5:2380"]}
	{"level":"info","ts":"2024-10-04T03:36:37.939522Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:36:37.939623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:36:37.934723Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:36:37.946369Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T03:36:37.949462Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b8c6c7563d17d844","initial-advertise-peer-urls":["https://192.169.0.5:2380"],"listen-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T03:36:37.949681Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T03:36:37.950004Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:36:37.950064Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:36:39.488342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T03:36:39.488577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:36:39.488648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:36:39.488708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.488812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.488875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.488941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.490388Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:36:39.490461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:36:39.490481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:36:39.490648Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:36:39.491343Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:36:39.491918Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:36:39.492286Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:36:39.492642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:36:39.493003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	
	
	==> kernel <==
	 03:38:12 up 2 min,  0 users,  load average: 0.28, 0.18, 0.07
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:30:43.497755       1 main.go:299] handling current node
	I1004 03:30:53.496402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:53.496595       1 main.go:299] handling current node
	I1004 03:30:53.496647       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:53.496795       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:03.496468       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:03.496619       1 main.go:299] handling current node
	I1004 03:31:03.496645       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:03.496656       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:13.497200       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:13.497236       1 main.go:299] handling current node
	I1004 03:31:13.497252       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:13.497259       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:23.497508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:23.497727       1 main.go:299] handling current node
	I1004 03:31:23.497777       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:23.497873       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:33.499104       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:33.499148       1 main.go:299] handling current node
	I1004 03:31:33.499160       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:33.499165       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:43.499561       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:43.499582       1 main.go:299] handling current node
	I1004 03:31:43.499592       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:43.499596       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f9fb6aeea4b6] <==
	I1004 03:37:07.023374       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:17.024006       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:17.024029       1 main.go:299] handling current node
	I1004 03:37:17.024041       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:17.024046       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:27.016102       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:27.016328       1 main.go:299] handling current node
	I1004 03:37:27.016461       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:27.016567       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:37.022430       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:37.022517       1 main.go:299] handling current node
	I1004 03:37:37.022541       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:37.022565       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:47.014804       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:47.015083       1 main.go:299] handling current node
	I1004 03:37:47.015247       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:47.015330       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:57.018026       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:57.018097       1 main.go:299] handling current node
	I1004 03:37:57.018115       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:57.018147       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:38:07.020920       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:38:07.021128       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:38:07.021544       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:38:07.021652       1 main.go:299] handling current node
	
	
	==> kube-apiserver [18a77afff888] <==
	I1004 03:36:40.325058       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 03:36:40.325213       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 03:36:40.334937       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1004 03:36:40.335017       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1004 03:36:40.364395       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:36:40.364604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:36:40.364759       1 policy_source.go:224] refreshing policies
	I1004 03:36:40.374385       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:36:40.418647       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 03:36:40.423571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:36:40.423778       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:36:40.423914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:36:40.424647       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:36:40.424699       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:36:40.425567       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:36:40.435139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:36:40.435487       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:36:40.435554       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:36:40.435596       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:36:40.435678       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:36:40.437733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:36:41.323990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1004 03:36:41.538108       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:36:41.539664       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:36:41.543233       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [95af0d749f45] <==
	W1004 03:31:45.565832       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.564688       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565697       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.563880       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565578       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571249       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571472       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571633       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571818       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572042       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572188       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572478       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572615       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572727       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572882       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572220       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572633       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572900       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572324       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572348       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572056       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572740       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572444       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.573046       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572405       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	I1004 03:26:22.202705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:28:54.315206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:31:28.798824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-controller-manager [bf67ec881904] <==
	I1004 03:36:43.895282       1 shared_informer.go:320] Caches are synced for disruption
	I1004 03:36:43.902289       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:36:44.322305       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:36:44.400906       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:36:44.400944       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:36:44.533941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:36:44.711601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.09172ms"
	I1004 03:36:44.711874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.172109ms"
	I1004 03:36:44.712697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.643µs"
	I1004 03:36:44.714076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.195µs"
	I1004 03:36:44.767934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.149435ms"
	I1004 03:36:44.769462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.061µs"
	I1004 03:36:45.976463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.398204ms"
	I1004 03:36:45.976597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.63µs"
	I1004 03:36:45.994009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.016µs"
	I1004 03:36:46.014850       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.508µs"
	I1004 03:37:23.711256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:37:23.721566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:37:23.723577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.436153ms"
	I1004 03:37:23.723631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.552µs"
	I1004 03:37:24.931777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.362084ms"
	I1004 03:37:24.932459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.953µs"
	I1004 03:37:24.946696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.287142ms"
	I1004 03:37:24.946977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="189.563µs"
	I1004 03:37:28.753714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e870db0c09c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:36:45.742770       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:36:45.775231       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:36:45.775291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:36:45.922303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:36:45.922329       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:36:45.922347       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:36:45.927222       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:36:45.928115       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:36:45.928127       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:45.937610       1 config.go:199] "Starting service config controller"
	I1004 03:36:45.937639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:36:45.937654       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:36:45.937658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:36:45.937932       1 config.go:328] "Starting node config controller"
	I1004 03:36:45.937937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:36:46.038944       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:36:46.039004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:36:46.051315       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5c0e6f76f23f] <==
	I1004 03:36:38.366946       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:36:40.340041       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:36:40.340076       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:36:40.340085       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:36:40.340089       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:36:40.388605       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:36:40.388643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:40.391116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:36:40.391386       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:36:40.391458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:36:40.391415       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:36:40.493018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:31:44.485971       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1004 03:31:44.486818       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1004 03:31:44.487813       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1004 03:31:44.490023       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.527318    1532 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.527755    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.607587    1532 apiserver.go:52] "Watching apiserver"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.613117    1532 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-214000" podUID="ca847c50-343b-4c77-ab73-48b82beb80d0"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.616386    1532 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.628430    1532 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-214000"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.640305    1532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efb57dcf8e6b9d435d22c9f59e8f0eb" path="/var/lib/kubelet/pods/9efb57dcf8e6b9d435d22c9f59e8f0eb/volumes"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.650788    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f5e9cfaf-fc93-45bd-9061-cf51f9eef735-tmp\") pod \"storage-provisioner\" (UID: \"f5e9cfaf-fc93-45bd-9061-cf51f9eef735\") " pod="kube-system/storage-provisioner"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.650951    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-cni-cfg\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.651002    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-lib-modules\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.651585    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-xtables-lock\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652013    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-xtables-lock\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652345    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-lib-modules\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.667905    1532 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.744827    1532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-214000" podStartSLOduration=0.744814896 podStartE2EDuration="744.814896ms" podCreationTimestamp="2024-10-04 03:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:36:44.735075165 +0000 UTC m=+14.226297815" watchObservedRunningTime="2024-10-04 03:36:44.744814896 +0000 UTC m=+14.236037540"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296434    1532 scope.go:117] "RemoveContainer" containerID="792bd20fa10c95874d8ad89fc2ecf38b64e23df2d19d9b348cf3e9c46121c1b2"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296668    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: E1004 03:37:16.296799    1532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f5e9cfaf-fc93-45bd-9061-cf51f9eef735)\"" pod="kube-system/storage-provisioner" podUID="f5e9cfaf-fc93-45bd-9061-cf51f9eef735"
	Oct 04 03:37:26 ha-214000 kubelet[1532]: I1004 03:37:26.640771    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: I1004 03:37:30.670798    1532 scope.go:117] "RemoveContainer" containerID="2e5127305b39f8d6e99e701a21860eb86b129da510647193574f5beeb8153b48"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: E1004 03:37:30.694461    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
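Two failure signals recur in the kubelet log above: storage-provisioner is in CrashLoopBackOff, and the kubelet's ip6tables canary cannot be created because legacy ip6tables finds no "nat" table in the guest kernel (typically the ip6table_nat module is not loaded; the canary is a kubelet self-check and often harmless on its own). A minimal way to poke at both by hand, assuming "minikube ssh" works against this profile and that the usual module name applies (neither is confirmed by this log):

	# hedged sketch, not part of the harness output
	out/minikube-darwin-amd64 ssh -p ha-214000 "sudo ip6tables -t nat -L -n"   # fails while the table is absent
	out/minikube-darwin-amd64 ssh -p ha-214000 "lsmod | grep ip6table_nat"     # is the module loaded at all?
	kubectl --context ha-214000 -n kube-system logs storage-provisioner --previous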
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  89s (x2 over 93s)   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  14m (x3 over 24m)   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m9s (x3 over 12m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (146.00s)
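The describe output above pins down the lone non-running pod: busybox-7dff88458-z5g4l is blocked by the busybox workload's pod anti-affinity, and after the restart only two schedulable nodes exist for three replicas (m03 never rejoined; note Port 0 for m03 in the profile JSON in the next failure). Assuming the replicas come from a Deployment named busybox (inferred from the ReplicaSet name, not shown directly), this can be confirmed with:

	kubectl --context ha-214000 get nodes -o wide   # expect only two Ready nodes
	kubectl --context ha-214000 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'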

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:415: expected profile "ha-214000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
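The assertion reads the Status field out of this JSON blob; the profile still reports "Starting" rather than the expected "Degraded", presumably because the restarted cluster had not finished converging when the check ran (m03 is listed with Port 0). When reproducing locally, the field can be pulled directly; a sketch assuming jq is installed on the host:

	out/minikube-darwin-amd64 profile list --output json | jq -r '.valid[] | select(.Name == "ha-214000") | .Status'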
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (2.97604189s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node start m02 -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000 -v=7               | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-214000 -v=7                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT | 03 Oct 24 20:31 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	| node    | ha-214000 node delete m03 -v=7       | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-214000 stop -v=7                  | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT | 03 Oct 24 20:35 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true             | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:35 PDT |                     |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
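	The audit trail above captures the sequence under test: stop the three-node cluster, then restart it with --wait=true. The final two steps can be replayed by hand with the same binary the harness built:
	
		out/minikube-darwin-amd64 stop -p ha-214000 -v=7 --alsologtostderr
		out/minikube-darwin-amd64 start -p ha-214000 --wait=true -v=7 --alsologtostderr --driver=hyperkit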
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:35:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:35:48.304540    4951 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:48.304733    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304739    4951 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:48.304743    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304927    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:35:48.306332    4951 out.go:352] Setting JSON to false
	I1003 20:35:48.334066    4951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3918,"bootTime":1728009030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:35:48.334215    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:35:48.356076    4951 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:35:48.398703    4951 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:35:48.398800    4951 notify.go:220] Checking for updates...
	I1003 20:35:48.442667    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:35:48.463910    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:35:48.485340    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:35:48.506572    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:35:48.527740    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:35:48.550278    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:48.551029    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.551094    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.563226    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51755
	I1003 20:35:48.563804    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.564307    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.564319    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.564662    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.564822    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.565117    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:35:48.565435    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.565487    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.576762    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I1003 20:35:48.577263    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.577677    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.577713    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.578069    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.578299    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.610723    4951 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:35:48.652521    4951 start.go:297] selected driver: hyperkit
	I1003 20:35:48.652550    4951 start.go:901] validating driver "hyperkit" against &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.652818    4951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:35:48.653002    4951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.653249    4951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:35:48.665237    4951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:35:48.671535    4951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.671574    4951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:35:48.676549    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:35:48.676588    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:35:48.676625    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:35:48.676690    4951 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.676815    4951 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.698601    4951 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:35:48.740785    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:35:48.740857    4951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:35:48.740884    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:35:48.741146    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:35:48.741164    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:35:48.741343    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.742237    4951 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:48.742380    4951 start.go:364] duration metric: took 119.499µs to acquireMachinesLock for "ha-214000"
	I1003 20:35:48.742414    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:48.742428    4951 fix.go:54] fixHost starting: 
	I1003 20:35:48.742857    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.742889    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.754302    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I1003 20:35:48.754621    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.754990    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.755005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.755241    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.755370    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.755459    4951 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:35:48.755544    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.755632    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:35:48.756648    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.756678    4951 fix.go:112] recreateIfNeeded on ha-214000: state=Stopped err=<nil>
	I1003 20:35:48.756695    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	W1003 20:35:48.756784    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:48.778933    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000" ...
	I1003 20:35:48.800930    4951 main.go:141] libmachine: (ha-214000) Calling .Start
	I1003 20:35:48.801199    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.801247    4951 main.go:141] libmachine: (ha-214000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:35:48.803311    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.803325    4951 main.go:141] libmachine: (ha-214000) DBG | pid 4822 is in state "Stopped"
	I1003 20:35:48.803341    4951 main.go:141] libmachine: (ha-214000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid...
	I1003 20:35:48.803610    4951 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:35:48.922193    4951 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:35:48.922226    4951 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:35:48.922379    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922424    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922546    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:35:48.922605    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:35:48.922622    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:35:48.924313    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Pid is 4964
	I1003 20:35:48.924838    4951 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:35:48.924852    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.924911    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4964
	I1003 20:35:48.927353    4951 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:35:48.927405    4951 main.go:141] libmachine: (ha-214000) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:35:48.927432    4951 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6fc2}
	I1003 20:35:48.927443    4951 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:35:48.927454    4951 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
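	The driver resolves the VM's IP by scanning the macOS host's DHCP lease database for the MAC it generated, as logged above. The same lookup can be done manually (a sketch; the lease file layout is an implementation detail of the host's bootpd server):
	
		grep -B2 -A3 'a:aa:e8:3c:fe:20' /var/db/dhcpd_leases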
	I1003 20:35:48.927543    4951 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:35:48.928494    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:35:48.928701    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.929276    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:35:48.929289    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.929410    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:35:48.929535    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:35:48.929649    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929777    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929900    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:35:48.930094    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:35:48.930303    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:35:48.930312    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:35:48.935400    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:35:48.990306    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:35:48.991238    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:48.991260    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:48.991278    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:48.991294    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.374490    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:35:49.374504    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:35:49.489812    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:49.489840    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:49.489854    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:49.489865    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.490699    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:35:49.490709    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:35:55.079541    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:35:55.079635    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:35:55.079652    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:35:55.103846    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:36:23.994265    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:36:23.994281    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994427    4951 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:36:23.994438    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994568    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:23.994676    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:23.994778    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994888    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994989    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:23.995134    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:23.995292    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:23.995301    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:36:24.061419    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:36:24.061438    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.061566    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.061665    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061761    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061855    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.062009    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.062160    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.062171    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:36:24.123229    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:36:24.123250    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:36:24.123267    4951 buildroot.go:174] setting up certificates
	I1003 20:36:24.123274    4951 provision.go:84] configureAuth start
	I1003 20:36:24.123280    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:24.123436    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:24.123534    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.123640    4951 provision.go:143] copyHostCerts
	I1003 20:36:24.123670    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123751    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:36:24.123759    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123933    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:36:24.124159    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124208    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:36:24.124213    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124299    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:36:24.124456    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124504    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:36:24.124508    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124593    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:36:24.124759    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:36:24.242470    4951 provision.go:177] copyRemoteCerts
	I1003 20:36:24.242536    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:36:24.242550    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.242680    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.242779    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.242882    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.242976    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:24.278106    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:36:24.278181    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:36:24.297749    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:36:24.297814    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 20:36:24.317337    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:36:24.317417    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:36:24.337360    4951 provision.go:87] duration metric: took 214.07513ms to configureAuth
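	configureAuth, summarized: refresh the host-side CA and client certs, mint a server certificate whose SANs cover every name the Docker daemon will be reached by (the san=[...] list above), then scp the three files into /etc/docker on the guest. A rough openssl equivalent of the server-cert step, purely illustrative since minikube generates certificates in Go rather than by shelling out:
	
		# bash (uses process substitution); file names mirror the paths in the log
		openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-214000" -keyout server-key.pem -out server.csr
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
		  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-214000,DNS:localhost,DNS:minikube')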
	I1003 20:36:24.337374    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:36:24.337568    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:24.337582    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:24.337722    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.337811    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.337893    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.337973    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.338066    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.338199    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.338322    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.338329    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:36:24.392942    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:36:24.392953    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:36:24.393026    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:36:24.393038    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.393177    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.393275    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393375    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393458    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.393607    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.393746    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.393789    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:36:24.457890    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:36:24.457915    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.458049    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.458145    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458223    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458324    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.458459    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.458606    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.458617    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:36:26.102134    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:36:26.102148    4951 machine.go:96] duration metric: took 37.172864722s to provisionDockerMachine
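
The one-liner above is an idempotent install: diff -u compares the freshly rendered unit against what is on disk, and only when they differ (or the file is missing, as the can't-stat error shows here) does the mv / daemon-reload / enable / restart chain run, so an unchanged unit never restarts Docker. A sketch of the same write-only-if-changed contract, assuming plain local files instead of sudo over SSH (installIfChanged is an illustrative helper):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged writes newContents to path only when the current file is
    // missing or different, and reports whether a change was made -- the same
    // contract as the `diff || { mv; systemctl restart; }` one-liner in the log.
    func installIfChanged(path string, newContents []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContents) {
            return false, nil // identical: nothing to do, no restart needed
        }
        if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
            return false, err
        }
        // rename is atomic on the same filesystem, like the mv in the log
        return true, os.Rename(path+".new", path)
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println("changed:", changed, "err:", err)
    }

Returning the changed flag lets the caller decide whether a service restart is needed, mirroring the || short-circuit in the shell version.
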
	I1003 20:36:26.102162    4951 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:36:26.102174    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:36:26.102184    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.102399    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:36:26.102415    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.102503    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.102602    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.102703    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.102803    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.136711    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:36:26.139862    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:36:26.139874    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:36:26.139975    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:36:26.140193    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:36:26.140200    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:36:26.140451    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:36:26.147627    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:26.167774    4951 start.go:296] duration metric: took 65.6041ms for postStartSetup
	I1003 20:36:26.167794    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.167968    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:36:26.167979    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.168089    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.168182    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.168259    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.168350    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.202842    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:36:26.202914    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:36:26.255647    4951 fix.go:56] duration metric: took 37.513223093s for fixHost
	I1003 20:36:26.255670    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.255816    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.255918    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256012    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256105    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.256247    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:26.256399    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:26.256406    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:36:26.311780    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012986.433392977
	
	I1003 20:36:26.311792    4951 fix.go:216] guest clock: 1728012986.433392977
	I1003 20:36:26.311797    4951 fix.go:229] Guest: 2024-10-03 20:36:26.433392977 -0700 PDT Remote: 2024-10-03 20:36:26.25566 -0700 PDT m=+37.989104353 (delta=177.732977ms)
	I1003 20:36:26.311814    4951 fix.go:200] guest clock delta is within tolerance: 177.732977ms
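
The guest clock is sampled with date +%s.%N (seconds and nanoseconds since the epoch) and compared against the host's clock; the 177.732977ms delta is below the sync threshold, so no adjustment is made. A sketch of that comparison, using the values from the log and an assumed 2s tolerance for illustration (the actual threshold minikube applies is not shown in this excerpt):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseDateSNs converts `date +%s.%N` output such as "1728012986.433392977"
    // into a time.Time.
    func parseDateSNs(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        if len(parts) != 2 {
            return time.Time{}, fmt.Errorf("unexpected format: %q", s)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseDateSNs("1728012986.433392977") // guest sample from the log
        host := guest.Add(-177732977 * time.Nanosecond)  // host-side sample, reconstructed
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta // compare magnitudes; the guest may lag or lead
        }
        const tolerance = 2 * time.Second // illustrative threshold, not minikube's
        fmt.Printf("guest clock delta %v within tolerance %v: %v\n", delta, tolerance, delta < tolerance)
    }
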
	I1003 20:36:26.311818    4951 start.go:83] releasing machines lock for "ha-214000", held for 37.569431066s
	I1003 20:36:26.311838    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.311964    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:26.312074    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312353    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312465    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312560    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:36:26.312588    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312635    4951 ssh_runner.go:195] Run: cat /version.json
	I1003 20:36:26.312646    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312690    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312745    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312781    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312825    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312873    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.312925    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.313009    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.313022    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.345222    4951 ssh_runner.go:195] Run: systemctl --version
	I1003 20:36:26.396121    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:36:26.401139    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:36:26.401189    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:36:26.413838    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:36:26.413851    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.413956    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.430665    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:36:26.439518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:36:26.448241    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:36:26.448295    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:36:26.457135    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.465984    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:36:26.474764    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.483576    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:36:26.492518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:36:26.501284    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:36:26.510114    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:36:26.518992    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:36:26.527133    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:36:26.527188    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:36:26.536233    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
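
The status-255 failure above is expected on first boot: the net.bridge.* sysctl keys only exist once the br_netfilter module is loaded, so the failed probe is the cue to modprobe it, after which IPv4 forwarding is switched on. A sketch of the same check-then-load fallback, assuming it runs as root on the Linux guest:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

        // The sysctl key only appears once br_netfilter is loaded, so a failed
        // stat is the cue to modprobe -- mirroring the fallback in the log.
        if _, err := os.Stat(key); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
                return
            }
        }

        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }
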
	I1003 20:36:26.544367    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:26.641761    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:36:26.661796    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.661912    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:36:26.678816    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.689242    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:36:26.701530    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.713140    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.724511    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:36:26.748353    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.759647    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.774287    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:36:26.777216    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:36:26.785211    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:36:26.800364    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:36:26.895359    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:36:27.004148    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:36:27.004239    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:36:27.018268    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:27.118971    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:36:29.441016    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.322026405s)
	I1003 20:36:29.441097    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:36:29.451786    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.462092    4951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:36:29.564537    4951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:36:29.669649    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.781720    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:36:29.795175    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.806194    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.917885    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:36:29.986582    4951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:36:29.986686    4951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:36:29.991213    4951 start.go:563] Will wait 60s for crictl version
	I1003 20:36:29.991273    4951 ssh_runner.go:195] Run: which crictl
	I1003 20:36:29.994306    4951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:36:30.019989    4951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:36:30.020072    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.036824    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.075524    4951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:36:30.075569    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:30.076023    4951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:36:30.080492    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
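
The bash fragment above keeps /etc/hosts idempotent: grep -v drops any existing line ending in a tab plus host.minikube.internal, the fresh mapping is appended, and the result is copied back, so repeated starts never accumulate duplicate entries. The same ensure-one-entry idea as a sketch (ensureHostsEntry is illustrative, and it prints the result rather than overwriting /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry returns hosts content with exactly one line mapping name
    // to ip, replacing any stale mapping for the same name -- the grep -v / echo
    // pattern from the log.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in, _ := os.ReadFile("/etc/hosts")
        out := ensureHostsEntry(string(in), "192.169.0.1", "host.minikube.internal")
        fmt.Print(out) // a real provisioner would copy this back with sudo, as the log does
    }
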
	I1003 20:36:30.091206    4951 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:36:30.091284    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:30.091356    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.103771    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.103786    4951 docker.go:615] Images already preloaded, skipping extraction
	I1003 20:36:30.103870    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.126324    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.126343    4951 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:36:30.126351    4951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:36:30.126423    4951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:36:30.126505    4951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:36:30.165944    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:36:30.165958    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:36:30.165970    4951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:36:30.165987    4951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:36:30.166068    4951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
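
The rendered KubeletConfiguration pins cgroupDriver: cgroupfs, which must agree with what dockerd reports; that is why the log runs docker info --format {{.CgroupDriver}} just before generating this config. A sketch of that consistency check, assuming the docker CLI is on PATH and taking the expected value from the config above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as in the log; dockerd typically reports "cgroupfs" or "systemd".
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        got := strings.TrimSpace(string(out))
        const want = "cgroupfs" // value rendered into the KubeletConfiguration
        if got != want {
            fmt.Printf("cgroup driver mismatch: docker=%q kubelet=%q\n", got, want)
            return
        }
        fmt.Println("cgroup drivers agree:", got)
    }

A mismatch here is a classic cause of kubelet startup failures, which is why the provisioner resolves it before kubeadm runs.
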
	
	I1003 20:36:30.166080    4951 kube-vip.go:115] generating kube-vip config ...
	I1003 20:36:30.166149    4951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:36:30.180124    4951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:36:30.180189    4951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
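
This static pod pins the virtual IP 192.169.0.254 on eth0 and load-balances port 8443 across control-plane nodes, with leader election via the plndr-cp-lock lease. Once a leader holds the VIP, the API server should answer on it. A quick reachability sketch (diagnostic only, hence InsecureSkipVerify; depending on the cluster's anonymous-auth policy /healthz may return 200 or an auth error, and either response proves the VIP routes):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip cert verification: the VIP's serving cert is signed by the
        // cluster CA, which this standalone probe does not load.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.169.0.254:8443/healthz")
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("VIP answered: %s %s\n", resp.Status, body)
    }
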
	I1003 20:36:30.180256    4951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:36:30.189222    4951 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:36:30.189287    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:36:30.198523    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:36:30.212259    4951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:36:30.225613    4951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:36:30.239086    4951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1003 20:36:30.252640    4951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:36:30.255560    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.265017    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:30.361055    4951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:36:30.373903    4951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:36:30.373915    4951 certs.go:194] generating shared ca certs ...
	I1003 20:36:30.373925    4951 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.374133    4951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:36:30.374229    4951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:36:30.374245    4951 certs.go:256] generating profile certs ...
	I1003 20:36:30.374372    4951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:36:30.374395    4951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9
	I1003 20:36:30.374412    4951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1003 20:36:30.510048    4951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 ...
	I1003 20:36:30.510064    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9: {Name:mkec630c178c10067131af2c5f3c9dd0e1fb1860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510503    4951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 ...
	I1003 20:36:30.510513    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9: {Name:mk3eade5c23e406463c386755ec0dc38e869ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510763    4951 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:36:30.511004    4951 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:36:30.511276    4951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:36:30.511286    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:36:30.511308    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:36:30.511328    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:36:30.511347    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:36:30.511373    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:36:30.511393    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:36:30.511411    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:36:30.511428    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:36:30.511527    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:36:30.511580    4951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:36:30.511594    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:36:30.511627    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:36:30.511660    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:36:30.511688    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:36:30.511757    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:30.511791    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.511811    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.511829    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.512286    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:36:30.547800    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:36:30.588463    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:36:30.624659    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:36:30.646082    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:36:30.665519    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:36:30.684966    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:36:30.704971    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:36:30.724730    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:36:30.744135    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:36:30.763735    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:36:30.782963    4951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:36:30.796275    4951 ssh_runner.go:195] Run: openssl version
	I1003 20:36:30.800456    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:36:30.808784    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812168    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812211    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.816317    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:36:30.824743    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:36:30.833176    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836568    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836613    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.840895    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:36:30.849202    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:36:30.857643    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861134    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861184    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.865411    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
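
Each CA above is installed twice: the PEM itself under /usr/share/ca-certificates, plus an /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA), where the hash comes from openssl x509 -hash -noout. This is the c_rehash-style layout OpenSSL uses to look up CAs by subject. A sketch of the link step, assuming the openssl binary is on PATH (paths are the ones from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"

        // Same probe as the log: prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL resolves CAs via <hash>.N symlinks in the certs directory.
        link := "/etc/ssl/certs/" + hash + ".0"
        if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Println("linked", link, "->", pem)
    }
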
	I1003 20:36:30.873865    4951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:36:30.877389    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:36:30.881788    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:36:30.886088    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:36:30.890422    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:36:30.894596    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:36:30.898773    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
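
Each openssl x509 -checkend 86400 run above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? A non-zero exit would flag the cert for regeneration before the control plane restarts. The same test expressed natively, with an illustrative path taken from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d --
    // true in exactly the cases where `openssl x509 -checkend <seconds>`
    // exits non-zero.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }
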
	I1003 20:36:30.902881    4951 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:36:30.902998    4951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:36:30.915213    4951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:36:30.923319    4951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:36:30.923331    4951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:36:30.923384    4951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:36:30.930635    4951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:36:30.930978    4951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.931055    4951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1440/kubeconfig needs updating (will repair): [kubeconfig missing "ha-214000" cluster setting kubeconfig missing "ha-214000" context setting]
	I1003 20:36:30.931232    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.931928    4951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.932136    4951 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd994f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:36:30.932465    4951 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:36:30.932658    4951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:36:30.939898    4951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1003 20:36:30.939909    4951 kubeadm.go:597] duration metric: took 16.574315ms to restartPrimaryControlPlane
	I1003 20:36:30.939914    4951 kubeadm.go:394] duration metric: took 37.038509ms to StartCluster
	I1003 20:36:30.939939    4951 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940028    4951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.940366    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940584    4951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:30.940597    4951 start.go:241] waiting for startup goroutines ...
	I1003 20:36:30.940605    4951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:36:30.940715    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:30.982685    4951 out.go:177] * Enabled addons: 
	I1003 20:36:31.003752    4951 addons.go:510] duration metric: took 63.132383ms for enable addons: enabled=[]
	I1003 20:36:31.003791    4951 start.go:246] waiting for cluster config update ...
	I1003 20:36:31.003802    4951 start.go:255] writing updated cluster config ...
	I1003 20:36:31.026641    4951 out.go:201] 
	I1003 20:36:31.047648    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:31.047721    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.069716    4951 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:36:31.111550    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:31.111584    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:36:31.111814    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:36:31.111847    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:36:31.111978    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.113032    4951 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:31.113184    4951 start.go:364] duration metric: took 124.813µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:36:31.113203    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:36:31.113208    4951 fix.go:54] fixHost starting: m02
	I1003 20:36:31.113580    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:36:31.113606    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:36:31.125064    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I1003 20:36:31.125517    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:36:31.125993    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:36:31.126005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:36:31.126252    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:36:31.126414    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.126604    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:36:31.126798    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.126890    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:36:31.127965    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:36:31.127999    4951 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
	I1003 20:36:31.128009    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	W1003 20:36:31.128129    4951 fix.go:138] unexpected machine state, will restart: <nil>
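
The restart decision above comes from a simple liveness probe: the driver reads the saved hyperkit pid (4274) from the machine's JSON, finds no such process in the process table, and concludes the VM is Stopped. The classic way to test pid liveness without disturbing the process is signal 0, sketched here for Unix (pidAlive is an illustrative helper):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // pidAlive reports whether a process with the given pid exists: kill(pid, 0)
    // delivers no signal but fails with ESRCH when the pid is gone -- the same
    // "missing from process table" test the driver performs.
    func pidAlive(pid int) bool {
        proc, err := os.FindProcess(pid) // on Unix this always succeeds
        if err != nil {
            return false
        }
        err = proc.Signal(syscall.Signal(0))
        // err == nil: alive. ESRCH: gone. (EPERM would mean alive but owned by
        // another user; treated as dead in this simplified sketch.)
        return err == nil
    }

    func main() {
        fmt.Println("pid 4274 alive:", pidAlive(4274))
    }
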
	I1003 20:36:31.170879    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
	I1003 20:36:31.191480    4951 main.go:141] libmachine: (ha-214000-m02) Calling .Start
	I1003 20:36:31.191791    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.191820    4951 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:36:31.191892    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:36:31.219578    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:36:31.219600    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:36:31.219761    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219849    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:36:31.219889    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:36:31.219902    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:36:31.221267    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Pid is 4978
	I1003 20:36:31.221656    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:36:31.221669    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.221749    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4978
	I1003 20:36:31.222942    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:36:31.223055    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:36:31.223074    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 20:36:31.223092    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff619f}
	I1003 20:36:31.223117    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:36:31.223134    4951 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
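The driver resolves the VM's IP by matching the MAC it generated against macOS's bootpd lease database. A manual equivalent of that lookup, sketched under the assumption that /var/db/dhcpd_leases uses bootpd's brace-delimited key=value records (ip_address=..., hw_address=1,<mac>):

	awk -v mac="8e:24:b7:e1:5:14" '
	  /ip_address=/ { split($0, kv, "="); ip = kv[2] }
	  /hw_address=/ { split($0, kv, "="); sub(/^1,/, "", kv[2])
	                  if (kv[2] == mac) { print ip; exit } }
	' /var/db/dhcpd_leases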
	I1003 20:36:31.223155    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:36:31.223858    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:36:31.224037    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.224458    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:36:31.224468    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.224583    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:36:31.224679    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:36:31.224777    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.224929    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.225026    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:36:31.225183    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:31.225340    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:36:31.225347    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:36:31.232364    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:36:31.241337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:36:31.242541    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.242561    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.242572    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.242585    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.630094    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:36:31.630110    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:36:31.744778    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.744796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.744827    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.744846    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.745666    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:36:31.745681    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:36:37.337247    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:36:37.337337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:36:37.337350    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:36:37.361028    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:37:06.292112    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:37:06.292127    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292262    4951 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:37:06.292277    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292374    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.292454    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.292532    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292617    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292696    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.292835    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.292968    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.292976    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:37:06.362584    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:37:06.362599    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.362740    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.362851    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.362945    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.363048    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.363204    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.363366    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.363377    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:37:06.429246    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
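The host-alias script above is idempotent: it only touches /etc/hosts when no line already maps the hostname, and then either rewrites an existing 127.0.1.1 entry in place or appends a new one, so repeated provisioning runs don't stack duplicates. A quick check over the same SSH session (the expected output is inferred from the script, not shown in the log):

	grep -n 'ha-214000-m02' /etc/hosts   # expect: 127.0.1.1 ha-214000-m02
	hostname                             # expect: ha-214000-m02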
	I1003 20:37:06.429262    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:37:06.429275    4951 buildroot.go:174] setting up certificates
	I1003 20:37:06.429281    4951 provision.go:84] configureAuth start
	I1003 20:37:06.429287    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.429430    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:06.429529    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.429617    4951 provision.go:143] copyHostCerts
	I1003 20:37:06.429649    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429696    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:37:06.429701    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429820    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:37:06.430049    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430079    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:37:06.430084    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430193    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:37:06.430369    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430399    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:37:06.430404    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430485    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:37:06.430651    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
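configureAuth issues a per-node server certificate signed by the local minikube CA, with SANs covering the node IP, hostname, localhost, and the minikube alias. minikube does this in-process in Go; a rough openssl equivalent of the same issuance (a sketch only, using bash process substitution; file names and SANs taken from the log line above):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ha-214000-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.6,DNS:ha-214000-m02,DNS:localhost,DNS:minikube')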
	I1003 20:37:06.504641    4951 provision.go:177] copyRemoteCerts
	I1003 20:37:06.504702    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:37:06.504733    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.504884    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.504988    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.505086    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.505168    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:06.541867    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:37:06.541936    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:37:06.560930    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:37:06.560992    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:37:06.579917    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:37:06.579984    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
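The three scp calls above stage exactly the TLS material that the dockerd flags in the unit written below expect (--tlscacert/--tlscert/--tlskey under /etc/docker). To inspect the staged server certificate and its SANs on the node (the -ext flag needs OpenSSL 1.1.1 or newer):

	openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName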
	I1003 20:37:06.599634    4951 provision.go:87] duration metric: took 170.34603ms to configureAuth
	I1003 20:37:06.599649    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:37:06.599816    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:37:06.599829    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:06.599963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.600044    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.600140    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600213    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600306    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.600434    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.600557    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.600564    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:37:06.660138    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:37:06.660150    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:37:06.660232    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:37:06.660242    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.660378    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.660498    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660607    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660708    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.660861    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.661001    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.661049    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:37:06.728946    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:37:06.728963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.729096    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.729209    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729384    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.729544    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.729682    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.729693    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:37:08.289911    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
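	The install command above is change-detecting: diff exits 0 when the freshly rendered docker.service.new matches the live unit, so the mv/daemon-reload/enable/restart branch behind || only fires when the unit actually changed. Here /lib/systemd/system/docker.service did not exist yet, hence the diff error and the fresh enable symlink. What systemd ended up with can be confirmed on the node with:

	systemctl cat docker.service | grep -A1 '^ExecStart='
	systemctl is-enabled docker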
	
	I1003 20:37:08.289925    4951 machine.go:96] duration metric: took 37.065461315s to provisionDockerMachine
	I1003 20:37:08.289933    4951 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:37:08.289944    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:37:08.289954    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.290150    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:37:08.290163    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.290256    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.290347    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.290425    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.290523    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.325637    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:37:08.328747    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:37:08.328757    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:37:08.328838    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:37:08.328975    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:37:08.328981    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:37:08.329139    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:37:08.336279    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:37:08.355765    4951 start.go:296] duration metric: took 65.822719ms for postStartSetup
	I1003 20:37:08.355783    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.355979    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:37:08.355992    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.356088    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.356171    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.356261    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.356337    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.391155    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:37:08.391224    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:37:08.443555    4951 fix.go:56] duration metric: took 37.330343063s for fixHost
	I1003 20:37:08.443608    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.443871    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.444091    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.444747    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:08.444947    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:08.444959    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:37:08.504053    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013028.627108120
	
	I1003 20:37:08.504066    4951 fix.go:216] guest clock: 1728013028.627108120
	I1003 20:37:08.504071    4951 fix.go:229] Guest: 2024-10-03 20:37:08.62710812 -0700 PDT Remote: 2024-10-03 20:37:08.443578 -0700 PDT m=+80.177024984 (delta=183.53012ms)
	I1003 20:37:08.504082    4951 fix.go:200] guest clock delta is within tolerance: 183.53012ms
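fixHost samples the guest clock over SSH (date +%s.%N) and compares it against the host clock; only a delta beyond tolerance would trigger a resync, and the 183ms measured here passes. A manual spot check at second resolution (the key path is the one from the sshutil lines above; BSD date on the macOS host lacks %N, so whole seconds are used):

	key=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa
	guest=$(ssh -i "$key" docker@192.169.0.6 'date +%s')
	host=$(date +%s)
	echo "drift: $((guest - host))s"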
	I1003 20:37:08.504087    4951 start.go:83] releasing machines lock for "ha-214000-m02", held for 37.390896714s
	I1003 20:37:08.504111    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.504258    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:08.525607    4951 out.go:177] * Found network options:
	I1003 20:37:08.567619    4951 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:37:08.588274    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.588315    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589205    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589467    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589610    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:37:08.589649    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:37:08.589687    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.589812    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:37:08.589832    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.589864    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590034    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590064    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590259    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590278    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590517    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.590537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590701    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:37:08.623322    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:37:08.623398    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:37:08.670987    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
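Stray bridge/podman CNI configs are renamed rather than deleted (a .mk_disabled suffix), so kubelet won't pick up a stale bridge network while the files stay recoverable; here 87-podman-bridge.conflist was parked. To list what was disabled:

	ls -l /etc/cni/net.d/*.mk_disabled 2>/dev/null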
	I1003 20:37:08.671009    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.671107    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:08.687184    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:37:08.696174    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:37:08.705216    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:37:08.705268    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:37:08.714371    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.723383    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:37:08.732289    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.741295    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:37:08.750471    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:37:08.759323    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:37:08.768482    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
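The sed pipeline above rewrites /etc/containerd/config.toml in place: pause:3.10 as the sandbox image, SystemdCgroup = false (the cgroupfs driver announced by containerd.go above), the legacy io.containerd.runtime.v1.linux and runc.v1 shims mapped to runc.v2, the CNI conf_dir pinned to /etc/cni/net.d, and enable_unprivileged_ports re-set to true. A spot check of the result:

	grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml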
	I1003 20:37:08.777704    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:37:08.785806    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:37:08.785866    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:37:08.794894    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
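The sysctl probe failed with status 255 only because the br_netfilter module was not loaded yet, so /proc/sys/net/bridge/* did not exist; the follow-up modprobe plus the ip_forward write supply the two kernel controls kube-proxy's iptables mode relies on. After the modprobe, both should read 1:

	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward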
	I1003 20:37:08.803171    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:08.897940    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:37:08.916833    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.916918    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:37:08.930156    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.942286    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:37:08.960158    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.971885    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:08.982659    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:37:08.999726    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:09.010351    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:09.025433    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:37:09.028502    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:37:09.035822    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
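With docker selected as the runtime, /etc/crictl.yaml is flipped from the containerd endpoint set a moment earlier to unix:///var/run/cri-dockerd.sock so crictl speaks CRI to cri-dockerd, and a 190-byte drop-in lands under /etc/systemd/system/cri-docker.service.d (the 10-cni.conf payload is not shown in the log; drop-ins there typically adjust cri-dockerd's startup flags). Once the runtime is up, both can be checked with:

	cat /etc/crictl.yaml
	systemctl cat cri-docker.service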
	I1003 20:37:09.049466    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:37:09.162468    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:37:09.273558    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:37:09.273582    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
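docker.go then aligns dockerd with kubelet on the cgroupfs driver through a small /etc/docker/daemon.json (130 bytes here; the exact payload is not shown in the log). A hypothetical minimal payload carrying just that setting:

	cat /etc/docker/daemon.json
	# assumed minimal content:
	# { "exec-opts": ["native.cgroupdriver=cgroupfs"] }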
	I1003 20:37:09.288188    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:09.384897    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:38:10.406862    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021950572s)
	I1003 20:38:10.406948    4951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:38:10.444120    4951 out.go:201] 
	W1003 20:38:10.464959    4951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:37:06 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391345461Z" level=info msg="Starting up"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391833106Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.395520305Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.412871636Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427882861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427981520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428050653Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428085226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428277072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428327604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428478894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428520070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428552138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428580964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428720722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428931280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430522141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430571354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430698188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430740032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430878079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430929217Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431351881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431440610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431485738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431519039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431551337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431619359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431825238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431902729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431941069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431978377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432012357Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432042063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432070459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432099321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432133473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432169855Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432202720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432268312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432315741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432351145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432383859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432414347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432447070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432476073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432510884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432548105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432578396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432608431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432640682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432669603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432698487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432729184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432768850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432801425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432829061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432911216Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432958882Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432989050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433017196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433045319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433074497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433102613Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433279017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433339149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433390358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433425703Z" level=info msg="containerd successfully booted in 0.021412s"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.415071774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.421056219Z" level=info msg="Loading containers: start."
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.500314931Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.331296883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.376605057Z" level=info msg="Loading containers: done."
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387546240Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387606581Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387647157Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387769053Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411526135Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411682523Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:37:08 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527035720Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:37:09 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527893788Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528149338Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528188105Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528221468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:10 ha-214000-m02 dockerd[929]: time="2024-10-04T03:37:10.559000347Z" level=info msg="Starting up"
	Oct 04 03:38:10 ha-214000-m02 dockerd[929]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
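	The journal isolates the failure: the first dockerd (pid 511) came up cleanly by spawning its own managed containerd on /var/run/docker/containerd/containerd.sock, but after the config push triggered systemctl restart docker, the second dockerd (pid 929) tried to dial the system socket /run/containerd/containerd.sock and gave up after 60s (context deadline exceeded), which accounts for the 1m1.02s restart above. Since minikube had stopped the system containerd moments earlier (systemctl stop -f containerd at 20:37:08), a stale socket or half-stopped containerd is a plausible culprit, though the log does not prove it. On the node, triage would start with:

	sudo systemctl status docker containerd --no-pager
	ls -l /run/containerd/containerd.sock
	sudo journalctl -u containerd --no-pager --since '10 min ago'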
	W1003 20:38:10.465066    4951 out.go:270] * 
	W1003 20:38:10.466299    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:38:10.543824    4951 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547509520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547587217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547600394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547679278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 cri-dockerd[1370]: time="2024-10-04T03:36:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc56c1f3c299c74527bc4bad7199ef2947f06a7fa736aaf71ff605e8aa07e0ac/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613336411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613472160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613483466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613584473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646020305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646150537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646177738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646306268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688829574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688917158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688931527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.689001023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:15 ha-214000 dockerd[1116]: time="2024-10-04T03:37:15.932971006Z" level=info msg="ignoring event" container=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933175492Z" level=info msg="shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933213613Z" level=warning msg="cleaning up after shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933220107Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691601551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691989127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692103682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692303810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ebd8d90ba3e8f       6e38f40d628db                                                                                         49 seconds ago       Running             storage-provisioner       2                   f61fdecdb5ed1       storage-provisioner
	f9fb6aeea4b68       12968670680f4                                                                                         About a minute ago   Running             kindnet-cni               1                   fc56c1f3c299c       kindnet-flq8x
	e388df4554b33       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   2401eafe0bd31       busybox-7dff88458-m7hqf
	e6ef332ed5737       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   c142be8b44551       coredns-7c65d6cfc9-slrtf
	985956e1cb3da       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   bf51af5037cab       coredns-7c65d6cfc9-l4wpg
	666390dc434d9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   f61fdecdb5ed1       storage-provisioner
	e870db0c09c44       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   69d6d030cf38a       kube-proxy-grxks
	2bccf57dd1cf7       18b729c2288dc                                                                                         About a minute ago   Running             kube-vip                  0                   013ce7946a369       kube-vip-ha-214000
	5c0e6f76f23f0       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   3b875ceff5048       kube-scheduler-ha-214000
	3a34ed1393f8c       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   9863db4133f6a       etcd-ha-214000
	18a77afff888c       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            1                   61526ecfca3d5       kube-apiserver-ha-214000
	bf67ec881904c       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   e9d8b9ee53b05       kube-controller-manager-ha-214000
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   24 minutes ago       Exited              busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         25 minutes ago       Exited              coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         25 minutes ago       Exited              coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              26 minutes ago       Exited              kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         26 minutes ago       Exited              kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	95af0d749f454       6bab7719df100                                                                                         26 minutes ago       Exited              kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         26 minutes ago       Exited              kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         26 minutes ago       Exited              etcd                      0                   67bb6b863c2ab       etcd-ha-214000
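
The table above is the CRI view from inside the primary node, roughly what crictl prints there (crictl being on the guest image is an assumption, though stock minikube ships it). The ATTEMPT 1 rows started "About a minute ago" next to the Exited ATTEMPT 0 rows aged 24-26 minutes are consistent with a node restart: the first-boot containers exited and the kubelet started fresh attempts. To reproduce the table:

  $ minikube ssh -p ha-214000 -- sudo crictl ps -a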
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [985956e1cb3d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58650 - 4380 "HINFO IN 7121940411115309935.5046063770853036442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044429796s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[262325284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.911) (total time: 30001ms):
	Trace[262325284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[262325284]: [30.001612351s] [30.001612351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1313713214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30002ms):
	Trace[1313713214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.914)
	Trace[1313713214]: [30.00243392s] [30.00243392s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1235317752]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30003ms):
	Trace[1235317752]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.915)
	Trace[1235317752]: [30.003174126s] [30.003174126s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
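
All three reflector List calls above time out dialing 10.96.0.1:443, the kubernetes Service ClusterIP, for exactly 30 seconds after this restarted CoreDNS came up at 03:36:45. The apiserver itself was serving again by 03:36:41 (see the kube-apiserver [18a77afff888] section below), so the gap points at the pod-to-ClusterIP path, i.e. kube-proxy's NAT rules not yet programmed on the fresh boot. Two quick checks, assuming kube-proxy's iptables mode and that the busybox test image ships wget (both assumptions):

  # is the ClusterIP translated to a real apiserver endpoint?
  $ minikube ssh -p ha-214000 -- 'sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1'
  # from a pod: a fast error means the TCP connect worked, a 5s hang means it did not
  $ kubectl exec busybox-7dff88458-m7hqf -- wget -T 5 -qO- http://10.96.0.1:443/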
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6ef332ed573] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60885 - 59353 "HINFO IN 8975973012052876199.2679720306794618198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011598991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1634039844]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30003ms):
	Trace[1634039844]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.913)
	Trace[1634039844]: [30.003123911s] [30.003123911s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1181919593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30001ms):
	Trace[1181919593]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.914)
	Trace[1181919593]: [30.001966872s] [30.001966872s] END
	[INFO] plugin/kubernetes: Trace[1826819322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30001ms):
	Trace[1826819322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[1826819322]: [30.001980832s] [30.001980832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:38:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cf440f8eb534a62b20c31c760022e88
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    a841ad05-f0b0-46f0-962d-fb6544f3eb77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 26m                  kube-proxy       
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 26m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26m (x3 over 26m)    kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x3 over 26m)    kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x2 over 26m)    kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    26m                  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  26m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m                  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     26m                  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  Starting                 26m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                  node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                25m                  kubelet          Node ha-214000 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                  node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-214000-m03 status is now: NodeReady
	  Normal  RegisteredNode           93s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeNotReady             53s                node-controller  Node ha-214000-m03 status is now: NodeNotReady
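
ha-214000-m03's kubelet last renewed its lease at 03:31:28, when the whole cluster went down, and never came back; once the restarted controller-manager's node monitor timed out it flipped every condition to Unknown at 03:37:23 and left the node.kubernetes.io/unreachable NoSchedule/NoExecute taints in place, which is the NodeNotReady event 53s ago. The taints and heartbeat times can be read directly:

  $ kubectl get node ha-214000-m03 -o jsonpath='{.spec.taints}{"\n"}'
  $ kubectl get node ha-214000-m03 -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.lastHeartbeatTime}{"\n"}{end}'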
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036162] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007697] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.692433] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.638785] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.210765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 4 03:36] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.104887] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +1.918276] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.255425] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +0.098027] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.126300] systemd-fstab-generator[1107]: Ignoring "noauto" option for root device
	[  +2.450359] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.101909] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.051667] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.054007] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +0.136592] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.448600] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +6.747516] kauditd_printk_skb: 88 callbacks suppressed
	[  +7.915746] kauditd_printk_skb: 40 callbacks suppressed
	[Oct 4 03:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:27:00.215476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-04T03:27:00.217042Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"1.236946ms","hash":1433174615,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-04T03:27:00.217099Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1433174615,"revision":1514,"compact-revision":973}
	{"level":"info","ts":"2024-10-04T03:31:28.060845Z","caller":"traceutil/trace.go:171","msg":"trace[860112081] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"112.489562ms","start":"2024-10-04T03:31:27.948335Z","end":"2024-10-04T03:31:28.060825Z","steps":["trace[860112081] 'process raft request'  (duration: 91.094323ms)","trace[860112081] 'compare'  (duration: 21.269614ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:31:44.553900Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:31:44.553958Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	{"level":"warn","ts":"2024-10-04T03:31:44.554007Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554028Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554108Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.562422Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:31:44.579712Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-10-04T03:31:44.581173Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581242Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581251Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [3a34ed1393f8] <==
	{"level":"info","ts":"2024-10-04T03:36:37.939034Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"b8c6c7563d17d844","added-peer-peer-urls":["https://192.169.0.5:2380"]}
	{"level":"info","ts":"2024-10-04T03:36:37.939522Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:36:37.939623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:36:37.934723Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:36:37.946369Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T03:36:37.949462Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b8c6c7563d17d844","initial-advertise-peer-urls":["https://192.169.0.5:2380"],"listen-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T03:36:37.949681Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T03:36:37.950004Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:36:37.950064Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:36:39.488342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T03:36:39.488577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:36:39.488648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-10-04T03:36:39.488708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.488812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.488875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.488941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-10-04T03:36:39.490388Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-214000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:36:39.490461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:36:39.490481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:36:39.490648Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:36:39.491343Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:36:39.491918Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:36:39.492286Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:36:39.492642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:36:39.493003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	
	
	==> kernel <==
	 03:38:16 up 2 min,  0 users,  load average: 0.25, 0.18, 0.07
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:30:43.497755       1 main.go:299] handling current node
	I1004 03:30:53.496402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:53.496595       1 main.go:299] handling current node
	I1004 03:30:53.496647       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:53.496795       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:03.496468       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:03.496619       1 main.go:299] handling current node
	I1004 03:31:03.496645       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:03.496656       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:13.497200       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:13.497236       1 main.go:299] handling current node
	I1004 03:31:13.497252       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:13.497259       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:23.497508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:23.497727       1 main.go:299] handling current node
	I1004 03:31:23.497777       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:23.497873       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:33.499104       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:33.499148       1 main.go:299] handling current node
	I1004 03:31:33.499160       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:33.499165       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:43.499561       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:43.499582       1 main.go:299] handling current node
	I1004 03:31:43.499592       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:43.499596       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f9fb6aeea4b6] <==
	I1004 03:37:07.023374       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:17.024006       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:17.024029       1 main.go:299] handling current node
	I1004 03:37:17.024041       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:17.024046       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:27.016102       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:27.016328       1 main.go:299] handling current node
	I1004 03:37:27.016461       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:27.016567       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:37.022430       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:37.022517       1 main.go:299] handling current node
	I1004 03:37:37.022541       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:37.022565       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:47.014804       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:47.015083       1 main.go:299] handling current node
	I1004 03:37:47.015247       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:47.015330       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:37:57.018026       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:37:57.018097       1 main.go:299] handling current node
	I1004 03:37:57.018115       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:37:57.018147       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:38:07.020920       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:38:07.021128       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:38:07.021544       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:38:07.021652       1 main.go:299] handling current node
	
	
	==> kube-apiserver [18a77afff888] <==
	I1004 03:36:40.325058       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 03:36:40.325213       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 03:36:40.334937       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1004 03:36:40.335017       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1004 03:36:40.364395       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:36:40.364604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:36:40.364759       1 policy_source.go:224] refreshing policies
	I1004 03:36:40.374385       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:36:40.418647       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 03:36:40.423571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:36:40.423778       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:36:40.423914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:36:40.424647       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:36:40.424699       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:36:40.425567       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:36:40.435139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:36:40.435487       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:36:40.435554       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:36:40.435596       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:36:40.435678       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:36:40.437733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:36:41.323990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1004 03:36:41.538108       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:36:41.539664       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:36:41.543233       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [95af0d749f45] <==
	W1004 03:31:45.565832       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.564688       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565697       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.563880       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565578       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571249       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571472       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571633       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571818       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572042       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572188       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572478       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572615       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572727       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572882       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572220       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572633       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572900       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572324       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572348       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572056       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572740       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572444       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.573046       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572405       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	I1004 03:26:22.202705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:28:54.315206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:31:28.798824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-controller-manager [bf67ec881904] <==
	I1004 03:36:43.895282       1 shared_informer.go:320] Caches are synced for disruption
	I1004 03:36:43.902289       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:36:44.322305       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:36:44.400906       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:36:44.400944       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:36:44.533941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:36:44.711601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.09172ms"
	I1004 03:36:44.711874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.172109ms"
	I1004 03:36:44.712697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.643µs"
	I1004 03:36:44.714076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.195µs"
	I1004 03:36:44.767934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.149435ms"
	I1004 03:36:44.769462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.061µs"
	I1004 03:36:45.976463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.398204ms"
	I1004 03:36:45.976597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.63µs"
	I1004 03:36:45.994009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.016µs"
	I1004 03:36:46.014850       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.508µs"
	I1004 03:37:23.711256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:37:23.721566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:37:23.723577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.436153ms"
	I1004 03:37:23.723631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.552µs"
	I1004 03:37:24.931777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.362084ms"
	I1004 03:37:24.932459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.953µs"
	I1004 03:37:24.946696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.287142ms"
	I1004 03:37:24.946977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="189.563µs"
	I1004 03:37:28.753714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e870db0c09c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:36:45.742770       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:36:45.775231       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:36:45.775291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:36:45.922303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:36:45.922329       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:36:45.922347       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:36:45.927222       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:36:45.928115       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:36:45.928127       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:45.937610       1 config.go:199] "Starting service config controller"
	I1004 03:36:45.937639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:36:45.937654       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:36:45.937658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:36:45.937932       1 config.go:328] "Starting node config controller"
	I1004 03:36:45.937937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:36:46.038944       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:36:46.039004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:36:46.051315       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5c0e6f76f23f] <==
	I1004 03:36:38.366946       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:36:40.340041       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:36:40.340076       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:36:40.340085       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:36:40.340089       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:36:40.388605       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:36:40.388643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:40.391116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:36:40.391386       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:36:40.391458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:36:40.391415       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:36:40.493018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:31:44.485971       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1004 03:31:44.486818       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1004 03:31:44.487813       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1004 03:31:44.490023       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.527318    1532 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.527755    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.607587    1532 apiserver.go:52] "Watching apiserver"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.613117    1532 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-214000" podUID="ca847c50-343b-4c77-ab73-48b82beb80d0"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.616386    1532 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.628430    1532 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-214000"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.640305    1532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efb57dcf8e6b9d435d22c9f59e8f0eb" path="/var/lib/kubelet/pods/9efb57dcf8e6b9d435d22c9f59e8f0eb/volumes"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.650788    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f5e9cfaf-fc93-45bd-9061-cf51f9eef735-tmp\") pod \"storage-provisioner\" (UID: \"f5e9cfaf-fc93-45bd-9061-cf51f9eef735\") " pod="kube-system/storage-provisioner"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.650951    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-cni-cfg\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.651002    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-lib-modules\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.651585    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-xtables-lock\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652013    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-xtables-lock\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652345    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-lib-modules\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.667905    1532 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.744827    1532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-214000" podStartSLOduration=0.744814896 podStartE2EDuration="744.814896ms" podCreationTimestamp="2024-10-04 03:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:36:44.735075165 +0000 UTC m=+14.226297815" watchObservedRunningTime="2024-10-04 03:36:44.744814896 +0000 UTC m=+14.236037540"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296434    1532 scope.go:117] "RemoveContainer" containerID="792bd20fa10c95874d8ad89fc2ecf38b64e23df2d19d9b348cf3e9c46121c1b2"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296668    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: E1004 03:37:16.296799    1532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f5e9cfaf-fc93-45bd-9061-cf51f9eef735)\"" pod="kube-system/storage-provisioner" podUID="f5e9cfaf-fc93-45bd-9061-cf51f9eef735"
	Oct 04 03:37:26 ha-214000 kubelet[1532]: I1004 03:37:26.640771    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: I1004 03:37:30.670798    1532 scope.go:117] "RemoveContainer" containerID="2e5127305b39f8d6e99e701a21860eb86b129da510647193574f5beeb8153b48"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: E1004 03:37:30.694461    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
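The repeated "Error cleaning up nftables rules ... Operation not supported" lines in both kube-proxy sections above are consistent with a guest kernel that lacks nftables support; kube-proxy then proceeds with the iptables backend ("Using iptables Proxier"). A minimal check, assuming the ha-214000 VM is still running and that the nft binary is present in the guest image (it may not ship in the minikube ISO):

	# Reproduce the failing nftables call inside the guest (illustrative only).
	out/minikube-darwin-amd64 -p ha-214000 ssh -- 'sudo nft add table ip kube-proxy; echo "exit=$?"'
	# Confirm the iptables backend that kube-proxy fell back to is populated.
	out/minikube-darwin-amd64 -p ha-214000 ssh -- 'sudo iptables -t nat -L KUBE-SERVICES | head -n 5'
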
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-z5g4l
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l
helpers_test.go:282: (dbg) kubectl --context ha-214000 describe pod busybox-7dff88458-z5g4l:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-z5g4l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2k4pp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2k4pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  93s (x2 over 97s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  14m (x3 over 24m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m13s (x3 over 12m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
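The FailedScheduling events above show the pending replica is blocked by pod anti-affinity: every schedulable node already hosts a busybox-7dff88458 replica, so another copy cannot be placed until an additional node is Ready. A quick way to confirm, assuming the ha-214000 context is still reachable (the jsonpath field names are standard Kubernetes API fields; the deployment's use of anti-affinity is inferred from the scheduler events):

	# Dump the anti-affinity term on the pending pod.
	kubectl --context ha-214000 get pod busybox-7dff88458-z5g4l -o jsonpath='{.spec.affinity.podAntiAffinity}'
	# List nodes to see how many are Ready to host a replica.
	kubectl --context ha-214000 get nodes -o wide
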
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.84s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (101.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-214000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-214000 --control-plane -v=7 --alsologtostderr: (1m37.695229915s)
ha_test.go:613: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 7 (397.494465ms)

                                                
                                                
-- stdout --
	ha-214000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-214000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-214000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	ha-214000-m04
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
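Exit status 7 here is not an arbitrary error code: per minikube's own status help text, the exit status encodes component health bitwise (1 for the host, 2 for the cluster, 4 for Kubernetes not OK), so 7 reflects the stopped m02/m03 components listed above. An illustrative re-check:

	# Re-run status and surface the bitmask-style exit code.
	out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr; echo "exit=$?"
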
** stderr ** 
	I1003 20:39:55.843956    5070 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:39:55.844168    5070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:55.844173    5070 out.go:358] Setting ErrFile to fd 2...
	I1003 20:39:55.844177    5070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:55.844358    5070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:39:55.844536    5070 out.go:352] Setting JSON to false
	I1003 20:39:55.844559    5070 mustload.go:65] Loading cluster: ha-214000
	I1003 20:39:55.844597    5070 notify.go:220] Checking for updates...
	I1003 20:39:55.844923    5070 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:39:55.844942    5070 status.go:174] checking status of ha-214000 ...
	I1003 20:39:55.845359    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:55.845409    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:55.856829    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I1003 20:39:55.857168    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:55.857608    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:55.857646    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:55.857891    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:55.858029    5070 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:39:55.858126    5070 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:39:55.858191    5070 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4964
	I1003 20:39:55.859256    5070 status.go:371] ha-214000 host status = "Running" (err=<nil>)
	I1003 20:39:55.859275    5070 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:39:55.859538    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:55.859563    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:55.870537    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51917
	I1003 20:39:55.870875    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:55.871244    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:55.871256    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:55.871481    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:55.871588    5070 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:39:55.871681    5070 host.go:66] Checking if "ha-214000" exists ...
	I1003 20:39:55.871941    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:55.871968    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:55.882869    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51919
	I1003 20:39:55.883176    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:55.883506    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:55.883514    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:55.883752    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:55.883878    5070 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:39:55.884054    5070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:39:55.884080    5070 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:39:55.884172    5070 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:39:55.884259    5070 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:39:55.884359    5070 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:39:55.884455    5070 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:39:55.917228    5070 ssh_runner.go:195] Run: systemctl --version
	I1003 20:39:55.921499    5070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:39:55.932789    5070 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:39:55.932814    5070 api_server.go:166] Checking apiserver status ...
	I1003 20:39:55.932866    5070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:39:55.944248    5070 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1967/cgroup
	W1003 20:39:55.951656    5070 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1967/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:39:55.951728    5070 ssh_runner.go:195] Run: ls
	I1003 20:39:55.955742    5070 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:39:55.958992    5070 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:39:55.959004    5070 status.go:463] ha-214000 apiserver status = Running (err=<nil>)
	I1003 20:39:55.959010    5070 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:39:55.959020    5070 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:39:55.959286    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:55.959318    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:55.970421    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51923
	I1003 20:39:55.970784    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:55.971150    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:55.971163    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:55.971421    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:55.971519    5070 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:39:55.971600    5070 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:39:55.971671    5070 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4978
	I1003 20:39:55.972771    5070 status.go:371] ha-214000-m02 host status = "Running" (err=<nil>)
	I1003 20:39:55.972781    5070 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:39:55.973054    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:55.973078    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:55.984378    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51925
	I1003 20:39:55.984720    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:55.985053    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:55.985066    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:55.985280    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:55.985404    5070 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:39:55.985493    5070 host.go:66] Checking if "ha-214000-m02" exists ...
	I1003 20:39:55.985752    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:55.985773    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:55.996774    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51927
	I1003 20:39:55.997076    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:55.997415    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:55.997424    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:55.997641    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:55.997753    5070 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:39:55.997891    5070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:39:55.997902    5070 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:39:55.997982    5070 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:39:55.998064    5070 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:39:55.998151    5070 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:39:55.998237    5070 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:39:56.031680    5070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:39:56.042965    5070 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:39:56.042981    5070 api_server.go:166] Checking apiserver status ...
	I1003 20:39:56.043038    5070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1003 20:39:56.053235    5070 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:39:56.053248    5070 status.go:463] ha-214000-m02 apiserver status = Stopped (err=<nil>)
	I1003 20:39:56.053254    5070 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:39:56.053263    5070 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:39:56.053553    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:56.053576    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:56.065129    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51930
	I1003 20:39:56.065468    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:56.065799    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:56.065809    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:56.066008    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:56.066125    5070 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:39:56.066219    5070 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:39:56.066291    5070 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:39:56.067396    5070 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid 4114 missing from process table
	I1003 20:39:56.067419    5070 status.go:371] ha-214000-m03 host status = "Stopped" (err=<nil>)
	I1003 20:39:56.067428    5070 status.go:384] host is not running, skipping remaining checks
	I1003 20:39:56.067431    5070 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:39:56.067442    5070 status.go:174] checking status of ha-214000-m04 ...
	I1003 20:39:56.067704    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:56.067724    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:56.078859    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I1003 20:39:56.079190    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:56.079523    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:56.079534    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:56.079786    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:56.079910    5070 main.go:141] libmachine: (ha-214000-m04) Calling .GetState
	I1003 20:39:56.079997    5070 main.go:141] libmachine: (ha-214000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:39:56.080065    5070 main.go:141] libmachine: (ha-214000-m04) DBG | hyperkit pid from json: 5047
	I1003 20:39:56.081189    5070 status.go:371] ha-214000-m04 host status = "Running" (err=<nil>)
	I1003 20:39:56.081197    5070 host.go:66] Checking if "ha-214000-m04" exists ...
	I1003 20:39:56.081463    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:56.081487    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:56.092467    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I1003 20:39:56.092789    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:56.093138    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:56.093155    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:56.093372    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:56.093486    5070 main.go:141] libmachine: (ha-214000-m04) Calling .GetIP
	I1003 20:39:56.093584    5070 host.go:66] Checking if "ha-214000-m04" exists ...
	I1003 20:39:56.093865    5070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:39:56.093899    5070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:39:56.104650    5070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I1003 20:39:56.104999    5070 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:39:56.105338    5070 main.go:141] libmachine: Using API Version  1
	I1003 20:39:56.105346    5070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:39:56.105605    5070 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:39:56.105732    5070 main.go:141] libmachine: (ha-214000-m04) Calling .DriverName
	I1003 20:39:56.105890    5070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:39:56.105903    5070 main.go:141] libmachine: (ha-214000-m04) Calling .GetSSHHostname
	I1003 20:39:56.105984    5070 main.go:141] libmachine: (ha-214000-m04) Calling .GetSSHPort
	I1003 20:39:56.106064    5070 main.go:141] libmachine: (ha-214000-m04) Calling .GetSSHKeyPath
	I1003 20:39:56.106161    5070 main.go:141] libmachine: (ha-214000-m04) Calling .GetSSHUsername
	I1003 20:39:56.106240    5070 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m04/id_rsa Username:docker}
	I1003 20:39:56.141803    5070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:39:56.153067    5070 kubeconfig.go:125] found "ha-214000" server: "https://192.169.0.254:8443"
	I1003 20:39:56.153080    5070 api_server.go:166] Checking apiserver status ...
	I1003 20:39:56.153128    5070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:39:56.163554    5070 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup
	W1003 20:39:56.170647    5070 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:39:56.170714    5070 ssh_runner.go:195] Run: ls
	I1003 20:39:56.173752    5070 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1003 20:39:56.177126    5070 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1003 20:39:56.177137    5070 status.go:463] ha-214000-m04 apiserver status = Running (err=<nil>)
	I1003 20:39:56.177150    5070 status.go:176] ha-214000-m04 status: &{Name:ha-214000-m04 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:615: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr" : exit status 7
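For manual triage (not something the test performs), the stopped machine could be brought back with the node subcommand, following the same pattern recorded for "node stop" in the audit table below; node names are taken from the status output above:

	# Illustrative recovery steps for the stopped worker, then re-check.
	out/minikube-darwin-amd64 -p ha-214000 node start m03
	out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
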
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (3.228494661s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node start m02 -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000 -v=7               | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-214000 -v=7                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT | 03 Oct 24 20:31 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	| node    | ha-214000 node delete m03 -v=7       | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-214000 stop -v=7                  | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT | 03 Oct 24 20:35 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true             | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:35 PDT |                     |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	| node    | add -p ha-214000                     | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:38 PDT | 03 Oct 24 20:39 PDT |
	|         | --control-plane -v=7                 |           |         |         |                     |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:35:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:35:48.304540    4951 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:48.304733    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304739    4951 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:48.304743    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304927    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:35:48.306332    4951 out.go:352] Setting JSON to false
	I1003 20:35:48.334066    4951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3918,"bootTime":1728009030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:35:48.334215    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:35:48.356076    4951 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:35:48.398703    4951 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:35:48.398800    4951 notify.go:220] Checking for updates...
	I1003 20:35:48.442667    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:35:48.463910    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:35:48.485340    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:35:48.506572    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:35:48.527740    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:35:48.550278    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:48.551029    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.551094    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.563226    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51755
	I1003 20:35:48.563804    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.564307    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.564319    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.564662    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.564822    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.565117    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:35:48.565435    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.565487    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.576762    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I1003 20:35:48.577263    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.577677    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.577713    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.578069    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.578299    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.610723    4951 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:35:48.652521    4951 start.go:297] selected driver: hyperkit
	I1003 20:35:48.652550    4951 start.go:901] validating driver "hyperkit" against &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.652818    4951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:35:48.653002    4951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.653249    4951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:35:48.665237    4951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:35:48.671535    4951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.671574    4951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:35:48.676549    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:35:48.676588    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:35:48.676625    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:35:48.676690    4951 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.676815    4951 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.698601    4951 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:35:48.740785    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:35:48.740857    4951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:35:48.740884    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:35:48.741146    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:35:48.741164    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:35:48.741343    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.742237    4951 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:48.742380    4951 start.go:364] duration metric: took 119.499µs to acquireMachinesLock for "ha-214000"
	I1003 20:35:48.742414    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:48.742428    4951 fix.go:54] fixHost starting: 
	I1003 20:35:48.742857    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.742889    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.754302    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I1003 20:35:48.754621    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.754990    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.755005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.755241    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.755370    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.755459    4951 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:35:48.755544    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.755632    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:35:48.756648    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.756678    4951 fix.go:112] recreateIfNeeded on ha-214000: state=Stopped err=<nil>
	I1003 20:35:48.756695    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	W1003 20:35:48.756784    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:48.778933    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000" ...
	I1003 20:35:48.800930    4951 main.go:141] libmachine: (ha-214000) Calling .Start
	I1003 20:35:48.801199    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.801247    4951 main.go:141] libmachine: (ha-214000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:35:48.803311    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.803325    4951 main.go:141] libmachine: (ha-214000) DBG | pid 4822 is in state "Stopped"
	I1003 20:35:48.803341    4951 main.go:141] libmachine: (ha-214000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid...
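
The stale-pid handling above is the standard Unix pattern: the pid recorded in hyperkit.pid (4822) no longer appears in the process table, so the file is removed before a fresh VM process is launched. A minimal Go sketch of that liveness probe, assuming a Unix host (illustrative only, not the hyperkit driver's actual code):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive reports whether pid refers to a live process. On Unix,
// os.FindProcess always succeeds, so the real test is Signal(0),
// which checks for existence without delivering a signal. (EPERM
// would also imply existence; ignored here for brevity.)
func pidAlive(pid int) bool {
	p, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(4822)) // the pid from the log above; false once the VM has died
}
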
	I1003 20:35:48.803610    4951 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:35:48.922193    4951 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:35:48.922226    4951 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:35:48.922379    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922424    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922546    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:35:48.922605    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:35:48.922622    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:35:48.924313    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Pid is 4964
	I1003 20:35:48.924838    4951 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:35:48.924852    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.924911    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4964
	I1003 20:35:48.927353    4951 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:35:48.927405    4951 main.go:141] libmachine: (ha-214000) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:35:48.927432    4951 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6fc2}
	I1003 20:35:48.927443    4951 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:35:48.927454    4951 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
	I1003 20:35:48.927543    4951 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:35:48.928494    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:35:48.928701    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.929276    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:35:48.929289    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.929410    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:35:48.929535    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:35:48.929649    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929777    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929900    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:35:48.930094    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:35:48.930303    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:35:48.930312    4951 main.go:141] libmachine: About to run SSH command:
	hostname
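
Each "About to run SSH command" entry is a one-shot session against the machine's SSH endpoint (192.169.0.5:22, user docker, per the sshutil lines below). A rough sketch of such a call with golang.org/x/crypto/ssh; the key path is a placeholder, and this is not libmachine's actual client:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/ha-214000/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only on throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s", err, out) // mirrors the log format above
}
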
	I1003 20:35:48.935400    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:35:48.990306    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:35:48.991238    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:48.991260    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:48.991278    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:48.991294    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.374490    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:35:49.374504    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:35:49.489812    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:49.489840    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:49.489854    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:49.489865    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.490699    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:35:49.490709    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:35:55.079541    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:35:55.079635    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:35:55.079652    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:35:55.103846    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:36:23.994265    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:36:23.994281    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994427    4951 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:36:23.994438    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994568    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:23.994676    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:23.994778    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994888    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994989    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:23.995134    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:23.995292    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:23.995301    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:36:24.061419    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:36:24.061438    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.061566    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.061665    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061761    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061855    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.062009    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.062160    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.062171    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
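
The shell above makes the /etc/hosts edit idempotent: skip if the name is already mapped, rewrite an existing 127.0.1.1 entry if there is one, otherwise append one. The same logic in Go, as a hedged sketch (the provisioner actually runs the shell shown above):

package main

import (
	"os"
	"regexp"
)

// ensureHostsEntry maps name to 127.0.1.1 in a hosts file, mirroring
// the three branches of the shell snippet.
func ensureHostsEntry(path, name string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(b) {
		return nil // hostname already present: nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.Match(b) {
		b = loop.ReplaceAll(b, []byte("127.0.1.1 "+name))
	} else {
		b = append(b, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, b, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-214000"); err != nil {
		panic(err)
	}
}
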
	I1003 20:36:24.123229    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:36:24.123250    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:36:24.123267    4951 buildroot.go:174] setting up certificates
	I1003 20:36:24.123274    4951 provision.go:84] configureAuth start
	I1003 20:36:24.123280    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:24.123436    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:24.123534    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.123640    4951 provision.go:143] copyHostCerts
	I1003 20:36:24.123670    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123751    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:36:24.123759    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123933    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:36:24.124159    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124208    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:36:24.124213    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124299    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:36:24.124456    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124504    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:36:24.124508    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124593    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:36:24.124759    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
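
provision.go is issuing a server certificate signed by the profile's CA, with the SANs listed above (127.0.0.1, the VM IP, the machine name, localhost, minikube). A self-contained crypto/x509 sketch of a certificate with that shape; the CA here is generated on the fly, whereas the real run loads ca.pem/ca-key.pem from .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-214000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
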
	I1003 20:36:24.242470    4951 provision.go:177] copyRemoteCerts
	I1003 20:36:24.242536    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:36:24.242550    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.242680    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.242779    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.242882    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.242976    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:24.278106    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:36:24.278181    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:36:24.297749    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:36:24.297814    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 20:36:24.317337    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:36:24.317417    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:36:24.337360    4951 provision.go:87] duration metric: took 214.07513ms to configureAuth
	I1003 20:36:24.337374    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:36:24.337568    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:24.337582    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:24.337722    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.337811    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.337893    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.337973    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.338066    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.338199    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.338322    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.338329    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:36:24.392942    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:36:24.392953    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:36:24.393026    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:36:24.393038    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.393177    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.393275    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393375    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393458    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.393607    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.393746    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.393789    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:36:24.457890    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:36:24.457915    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.458049    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.458145    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458223    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458324    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.458459    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.458606    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.458617    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:36:26.102134    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:36:26.102148    4951 machine.go:96] duration metric: took 37.172864722s to provisionDockerMachine
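
The compound diff || { mv; ... } command above is a write-if-changed guard: the unit is moved into place, and the daemon-reload/enable/restart trio runs, only when the rendered file differs from what is on disk (here diff fails because docker.service did not exist yet, so the unit is installed and the symlink created). The same idempotent shape in Go, as a sketch; the real code shells out exactly as logged:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit writes the rendered unit only when it differs from the
// current file, then reloads systemd and restarts the service.
func installUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
	if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
		panic(err)
	}
}
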
	I1003 20:36:26.102162    4951 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:36:26.102174    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:36:26.102184    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.102399    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:36:26.102415    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.102503    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.102602    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.102703    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.102803    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.136711    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:36:26.139862    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:36:26.139874    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:36:26.139975    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:36:26.140193    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:36:26.140200    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:36:26.140451    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:36:26.147627    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:26.167774    4951 start.go:296] duration metric: took 65.6041ms for postStartSetup
	I1003 20:36:26.167794    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.167968    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:36:26.167979    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.168089    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.168182    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.168259    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.168350    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.202842    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:36:26.202914    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:36:26.255647    4951 fix.go:56] duration metric: took 37.513223093s for fixHost
	I1003 20:36:26.255670    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.255816    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.255918    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256012    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256105    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.256247    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:26.256399    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:26.256406    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:36:26.311780    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012986.433392977
	
	I1003 20:36:26.311792    4951 fix.go:216] guest clock: 1728012986.433392977
	I1003 20:36:26.311797    4951 fix.go:229] Guest: 2024-10-03 20:36:26.433392977 -0700 PDT Remote: 2024-10-03 20:36:26.25566 -0700 PDT m=+37.989104353 (delta=177.732977ms)
	I1003 20:36:26.311814    4951 fix.go:200] guest clock delta is within tolerance: 177.732977ms
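
fix.go's clock check parses the guest's `date +%s.%N` output and compares it with the host's wall clock; a delta inside tolerance (about 178ms here) is accepted without adjustment. A small Go sketch of that comparison, fed the exact values from this run:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta converts `date +%s.%N` output to a time.Time and returns
// guest minus host.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1728012986, 255660000) // the "Remote" timestamp from the log
	d, err := clockDelta("1728012986.433392977", host)
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // ≈177.73ms, matching the logged delta (modulo float64 rounding)
}
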
	I1003 20:36:26.311818    4951 start.go:83] releasing machines lock for "ha-214000", held for 37.569431066s
	I1003 20:36:26.311838    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.311964    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:26.312074    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312353    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312465    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312560    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:36:26.312588    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312635    4951 ssh_runner.go:195] Run: cat /version.json
	I1003 20:36:26.312646    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312690    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312745    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312781    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312825    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312873    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.312925    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.313009    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.313022    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.345222    4951 ssh_runner.go:195] Run: systemctl --version
	I1003 20:36:26.396121    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:36:26.401139    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:36:26.401189    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:36:26.413838    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:36:26.413851    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.413956    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.430665    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:36:26.439518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:36:26.448241    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:36:26.448295    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:36:26.457135    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.465984    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:36:26.474764    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.483576    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:36:26.492518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:36:26.501284    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:36:26.510114    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
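Taken together, the sed edits above pin containerd to the cgroupfs cgroup driver, the io.containerd.runc.v2 shim, the registry.k8s.io/pause:3.10 sandbox image, and /etc/cni/net.d as the CNI conf dir. A quick spot-check of the rewritten file on the guest (a sketch; the key names come from the commands above):

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expected after the edits:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   conf_dir = "/etc/cni/net.d"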
	I1003 20:36:26.518992    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:36:26.527133    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:36:26.527188    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:36:26.536233    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
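The sysctl probe above fails with status 255 only because br_netfilter is not loaded yet, which is why it is immediately followed by modprobe. The standalone equivalent of this prepare-netfilter step (same commands as the log):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # resolvable once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # kube-proxy requires IPv4 forwarding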
	I1003 20:36:26.544367    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:26.641761    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:36:26.661796    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.661912    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:36:26.678816    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.689242    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:36:26.701530    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.713140    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.724511    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:36:26.748353    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.759647    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.774287    4951 ssh_runner.go:195] Run: which cri-dockerd
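The crictl.yaml write just above replaces the version written moments earlier for containerd: with the docker runtime selected, crictl has to be pointed at the cri-dockerd socket instead. As standalone commands (a sketch; crictl reads /etc/crictl.yaml by default):

    sudo tee /etc/crictl.yaml <<'EOF'
    runtime-endpoint: unix:///var/run/cri-dockerd.sock
    EOF
    sudo crictl info    # should now report the docker runtime via cri-dockerd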
	I1003 20:36:26.777216    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:36:26.785211    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:36:26.800364    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:36:26.895359    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:36:27.004148    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:36:27.004239    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:36:27.018268    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:27.118971    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:36:29.441016    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.322026405s)
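The 130-byte daemon.json scp'd above is what switches Docker to the cgroupfs driver. Its exact contents are not printed in the log; a minimal illustrative equivalent would be:

    # illustrative only -- the actual payload is not shown in the log
    sudo tee /etc/docker/daemon.json <<'EOF'
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker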
	I1003 20:36:29.441097    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:36:29.451786    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.462092    4951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:36:29.564537    4951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:36:29.669649    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.781720    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:36:29.795175    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.806194    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.917885    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:36:29.986582    4951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:36:29.986686    4951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:36:29.991213    4951 start.go:563] Will wait 60s for crictl version
	I1003 20:36:29.991273    4951 ssh_runner.go:195] Run: which crictl
	I1003 20:36:29.994306    4951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:36:30.019989    4951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:36:30.020072    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.036824    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.075524    4951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:36:30.075569    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:30.076023    4951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:36:30.080492    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
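The one-liner above is the usual safe pattern for editing /etc/hosts from tooling: rebuild the file in /tmp, then sudo-copy it back over the original, which replaces the contents without touching the file's inode or permissions (in-place edits can fail on guests where /etc/hosts is a mount). Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.169.0.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts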
	I1003 20:36:30.091206    4951 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:36:30.091284    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:30.091356    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.103771    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.103786    4951 docker.go:615] Images already preloaded, skipping extraction
	I1003 20:36:30.103870    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.126324    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.126343    4951 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:36:30.126351    4951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:36:30.126423    4951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:36:30.126505    4951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
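This query is how minikube verifies the driver it just configured; after the daemon.json rewrite above it should come back as cgroupfs:

    docker info --format '{{.CgroupDriver}}'
    # expected output: cgroupfs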
	I1003 20:36:30.165944    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:36:30.165958    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:36:30.165970    4951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:36:30.165987    4951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:36:30.166068    4951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
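The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and, on this restart path, only diffed against the previous copy to decide whether reconfiguration is needed. On a fresh node the same file would typically feed kubeadm directly; both uses, sketched:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new   # restart path (as below)
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml                        # fresh-cluster path (illustrative)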
	
	I1003 20:36:30.166080    4951 kube-vip.go:115] generating kube-vip config ...
	I1003 20:36:30.166149    4951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:36:30.180124    4951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:36:30.180189    4951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
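This manifest is installed as a static pod (it is scp'd to /etc/kubernetes/manifests/kube-vip.yaml below), so kubelet runs kube-vip directly, without the API server. Once the elected leader holds the VIP, it should be visible on the configured interface; a quick check, assuming this node currently holds the lease:

    ip addr show eth0 | grep 192.169.0.254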
	I1003 20:36:30.180256    4951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:36:30.189222    4951 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:36:30.189287    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:36:30.198523    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:36:30.212259    4951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:36:30.225613    4951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:36:30.239086    4951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1003 20:36:30.252640    4951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:36:30.255560    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.265017    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:30.361055    4951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:36:30.373903    4951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:36:30.373915    4951 certs.go:194] generating shared ca certs ...
	I1003 20:36:30.373925    4951 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.374133    4951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:36:30.374229    4951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:36:30.374245    4951 certs.go:256] generating profile certs ...
	I1003 20:36:30.374372    4951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:36:30.374395    4951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9
	I1003 20:36:30.374412    4951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1003 20:36:30.510048    4951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 ...
	I1003 20:36:30.510064    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9: {Name:mkec630c178c10067131af2c5f3c9dd0e1fb1860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510503    4951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 ...
	I1003 20:36:30.510513    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9: {Name:mk3eade5c23e406463c386755ec0dc38e869ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510763    4951 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:36:30.511004    4951 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:36:30.511276    4951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:36:30.511286    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:36:30.511308    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:36:30.511328    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:36:30.511347    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:36:30.511373    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:36:30.511393    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:36:30.511411    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:36:30.511428    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:36:30.511527    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:36:30.511580    4951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:36:30.511594    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:36:30.511627    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:36:30.511660    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:36:30.511688    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:36:30.511757    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:30.511791    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.511811    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.511829    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.512286    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:36:30.547800    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:36:30.588463    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:36:30.624659    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:36:30.646082    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:36:30.665519    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:36:30.684966    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:36:30.704971    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:36:30.724730    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:36:30.744135    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:36:30.763735    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:36:30.782963    4951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:36:30.796275    4951 ssh_runner.go:195] Run: openssl version
	I1003 20:36:30.800456    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:36:30.808784    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812168    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812211    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.816317    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:36:30.824743    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:36:30.833176    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836568    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836613    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.840895    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:36:30.849202    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:36:30.857643    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861134    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861184    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.865411    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
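The eight-hex-digit link names (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: the openssl x509 -hash calls above compute them, and the symlinks make each cert discoverable through OpenSSL's hashed certificate directory lookup. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created above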
	I1003 20:36:30.873865    4951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:36:30.877389    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:36:30.881788    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:36:30.886088    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:36:30.890422    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:36:30.894596    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:36:30.898773    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
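Each of these probes uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now, making it a cheap pre-start expiry gate:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"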
	I1003 20:36:30.902881    4951 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:36:30.902998    4951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:36:30.915213    4951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:36:30.923319    4951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:36:30.923331    4951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:36:30.923384    4951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:36:30.930635    4951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:36:30.930978    4951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.931055    4951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1440/kubeconfig needs updating (will repair): [kubeconfig missing "ha-214000" cluster setting kubeconfig missing "ha-214000" context setting]
	I1003 20:36:30.931232    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.931928    4951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.932136    4951 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd994f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:36:30.932465    4951 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:36:30.932658    4951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:36:30.939898    4951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1003 20:36:30.939909    4951 kubeadm.go:597] duration metric: took 16.574315ms to restartPrimaryControlPlane
	I1003 20:36:30.939914    4951 kubeadm.go:394] duration metric: took 37.038509ms to StartCluster
	I1003 20:36:30.939939    4951 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940028    4951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.940366    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940584    4951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:30.940597    4951 start.go:241] waiting for startup goroutines ...
	I1003 20:36:30.940605    4951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:36:30.940715    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:30.982685    4951 out.go:177] * Enabled addons: 
	I1003 20:36:31.003752    4951 addons.go:510] duration metric: took 63.132383ms for enable addons: enabled=[]
	I1003 20:36:31.003791    4951 start.go:246] waiting for cluster config update ...
	I1003 20:36:31.003802    4951 start.go:255] writing updated cluster config ...
	I1003 20:36:31.026641    4951 out.go:201] 
	I1003 20:36:31.047648    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:31.047721    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.069716    4951 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:36:31.111550    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:31.111584    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:36:31.111814    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:36:31.111847    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:36:31.111978    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.113032    4951 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:31.113184    4951 start.go:364] duration metric: took 124.813µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:36:31.113203    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:36:31.113208    4951 fix.go:54] fixHost starting: m02
	I1003 20:36:31.113580    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:36:31.113606    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:36:31.125064    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I1003 20:36:31.125517    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:36:31.125993    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:36:31.126005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:36:31.126252    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:36:31.126414    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.126604    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:36:31.126798    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.126890    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:36:31.127965    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:36:31.127999    4951 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
	I1003 20:36:31.128009    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	W1003 20:36:31.128129    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:36:31.170879    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
	I1003 20:36:31.191480    4951 main.go:141] libmachine: (ha-214000-m02) Calling .Start
	I1003 20:36:31.191791    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.191820    4951 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:36:31.191892    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:36:31.219578    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:36:31.219600    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:36:31.219761    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219849    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:36:31.219889    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:36:31.219902    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:36:31.221267    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Pid is 4978
	I1003 20:36:31.221656    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:36:31.221669    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.221749    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4978
	I1003 20:36:31.222942    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:36:31.223055    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:36:31.223074    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 20:36:31.223092    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff619f}
	I1003 20:36:31.223117    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:36:31.223134    4951 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
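hyperkit guests lease their addresses from macOS's vmnet DHCP server, so the driver recovers the VM's IP by matching the generated MAC against the host's lease database. The same lookup by hand (a sketch):

    grep -B2 -A3 '8e:24:b7:e1:5:14' /var/db/dhcpd_leases
    # the matching entry carries ip_address=192.169.0.6, as found above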
	I1003 20:36:31.223155    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:36:31.223858    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:36:31.224037    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.224458    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:36:31.224468    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.224583    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:36:31.224679    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:36:31.224777    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.224929    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.225026    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:36:31.225183    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:31.225340    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:36:31.225347    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:36:31.232364    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:36:31.241337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:36:31.242541    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.242561    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.242572    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.242585    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.630094    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:36:31.630110    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:36:31.744778    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.744796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.744827    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.744846    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.745666    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:36:31.745681    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:36:37.337247    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:36:37.337337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:36:37.337350    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:36:37.361028    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:37:06.292112    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:37:06.292127    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292262    4951 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:37:06.292277    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292374    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.292454    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.292532    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292617    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292696    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.292835    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.292968    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.292976    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:37:06.362584    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:37:06.362599    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.362740    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.362851    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.362945    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.363048    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.363204    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.363366    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.363377    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:37:06.429246    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:37:06.429262    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:37:06.429275    4951 buildroot.go:174] setting up certificates
	I1003 20:37:06.429281    4951 provision.go:84] configureAuth start
	I1003 20:37:06.429287    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.429430    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:06.429529    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.429617    4951 provision.go:143] copyHostCerts
	I1003 20:37:06.429649    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429696    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:37:06.429701    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429820    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:37:06.430049    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430079    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:37:06.430084    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430193    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:37:06.430369    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430399    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:37:06.430404    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430485    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:37:06.430651    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:37:06.504641    4951 provision.go:177] copyRemoteCerts
	I1003 20:37:06.504702    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:37:06.504733    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.504884    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.504988    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.505086    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.505168    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:06.541867    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:37:06.541936    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:37:06.560930    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:37:06.560992    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:37:06.579917    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:37:06.579984    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:37:06.599634    4951 provision.go:87] duration metric: took 170.34603ms to configureAuth
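
configureAuth refreshes the host-side CA/client certs and then generates a server certificate whose SANs cover every name the node answers to (127.0.0.1, 192.169.0.6, ha-214000-m02, localhost, minikube). A self-signed crypto/x509 sketch of the same SAN layout (the log signs with minikube's CA instead; self-signing keeps the example short):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-214000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the provision log above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    		DNSNames:    []string{"ha-214000-m02", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
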
	I1003 20:37:06.599649    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:37:06.599816    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:37:06.599829    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:06.599963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.600044    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.600140    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600213    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600306    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.600434    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.600557    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.600564    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:37:06.660138    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:37:06.660150    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:37:06.660232    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:37:06.660242    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.660378    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.660498    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660607    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660708    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.660861    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.661001    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.661049    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:37:06.728946    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:37:06.728963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.729096    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.729209    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729384    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.729544    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.729682    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.729693    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:37:08.289911    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:37:08.289925    4951 machine.go:96] duration metric: took 37.065461315s to provisionDockerMachine
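
The unit install above is a compare-then-swap: the new unit is written to docker.service.new, diffed against the live unit, and only on a difference (or, as here, when the old unit does not exist yet) moved into place followed by daemon-reload, enable, and restart. A rough local Go driver for that idiom (illustrative only; the real sequence runs over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func updateUnit() error {
    	// diff exits non-zero when the files differ (or the old unit is
    	// missing, as in the log above), which triggers the swap.
    	cmd := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
    		sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    		sudo systemctl daemon-reload
    		sudo systemctl enable docker
    		sudo systemctl restart docker
    	}`
    	out, err := exec.Command("bash", "-c", cmd).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := updateUnit(); err != nil {
    		fmt.Println("update failed:", err)
    	}
    }
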
	I1003 20:37:08.289933    4951 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:37:08.289944    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:37:08.289954    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.290150    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:37:08.290163    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.290256    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.290347    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.290425    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.290523    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.325637    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:37:08.328747    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:37:08.328757    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:37:08.328838    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:37:08.328975    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:37:08.328981    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:37:08.329139    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:37:08.336279    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:37:08.355765    4951 start.go:296] duration metric: took 65.822719ms for postStartSetup
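
postStartSetup mirrors everything under .minikube/files into the guest at the same relative path, which is how files/etc/ssl/certs/20032.pem above lands at /etc/ssl/certs/20032.pem. A sketch of that path mapping (hypothetical walker, not the filesync package itself):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	root := "/Users/jenkins/minikube-integration/19546-1440/.minikube/files"
    	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		// Strip the local prefix; the remainder is the guest destination.
    		dest := strings.TrimPrefix(path, root)
    		fmt.Printf("%s -> %s\n", path, dest)
    		return nil
    	})
    }
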
	I1003 20:37:08.355783    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.355979    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:37:08.355992    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.356088    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.356171    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.356261    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.356337    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.391155    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:37:08.391224    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:37:08.443555    4951 fix.go:56] duration metric: took 37.330343063s for fixHost
	I1003 20:37:08.443608    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.443871    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.444091    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.444747    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:08.444947    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:08.444959    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:37:08.504053    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013028.627108120
	
	I1003 20:37:08.504066    4951 fix.go:216] guest clock: 1728013028.627108120
	I1003 20:37:08.504071    4951 fix.go:229] Guest: 2024-10-03 20:37:08.62710812 -0700 PDT Remote: 2024-10-03 20:37:08.443578 -0700 PDT m=+80.177024984 (delta=183.53012ms)
	I1003 20:37:08.504082    4951 fix.go:200] guest clock delta is within tolerance: 183.53012ms
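
The guest clock check parses the date +%s.%N reading and compares it with the host-side timestamp; here the delta is 183.53012ms, inside tolerance. The same arithmetic in Go, reusing the values from the log (assumes the nine-digit nanosecond field %N emits):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta compares a guest `date +%s.%N` reading against the
    // host clock, mirroring the tolerance check in the log above.
    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	host := time.Unix(1728013028, 443578000) // "Remote" timestamp from the log
    	delta, _ := guestClockDelta("1728013028.627108120", host)
    	fmt.Println("delta:", delta) // ~183.53ms, within tolerance
    }
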
	I1003 20:37:08.504087    4951 start.go:83] releasing machines lock for "ha-214000-m02", held for 37.390896714s
	I1003 20:37:08.504111    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.504258    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:08.525607    4951 out.go:177] * Found network options:
	I1003 20:37:08.567619    4951 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:37:08.588274    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.588315    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589205    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589467    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589610    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:37:08.589649    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:37:08.589687    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.589812    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:37:08.589832    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.589864    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590034    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590064    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590259    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590278    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590517    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.590537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590701    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:37:08.623322    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:37:08.623398    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:37:08.670987    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
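
Disabling the bridge/podman CNI configs works by renaming matching files to *.mk_disabled, skipping files already renamed so reruns are no-ops. A Go sketch of the same rename pass (assumes the /etc/cni/net.d layout from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Rename bridge/podman CNI configs out of the way, as the
    	// find -exec mv in the log does.
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, _ := filepath.Glob(pat)
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled; keep the pass idempotent
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Println("skip:", err)
    				continue
    			}
    			fmt.Println("disabled:", m)
    		}
    	}
    }
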
	I1003 20:37:08.671009    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.671107    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:08.687184    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:37:08.696174    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:37:08.705216    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:37:08.705268    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:37:08.714371    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.723383    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:37:08.732289    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.741295    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:37:08.750471    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:37:08.759323    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:37:08.768482    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:37:08.777704    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:37:08.785806    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:37:08.785866    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:37:08.794894    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:37:08.803171    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:08.897940    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
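
The sed chain above pins containerd to the cgroupfs driver by rewriting individual lines of /etc/containerd/config.toml, for example forcing SystemdCgroup = false. The same single-line rewrite as a Go regexp (a sketch of the technique, not minikube's code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	// Equivalent to: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
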
	I1003 20:37:08.916833    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.916918    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:37:08.930156    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.942286    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:37:08.960158    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.971885    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:08.982659    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:37:08.999726    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:09.010351    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:09.025433    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:37:09.028502    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:37:09.035822    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:37:09.049466    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:37:09.162468    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:37:09.273558    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:37:09.273582    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
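
For Docker itself the cgroup driver is set by pushing a small daemon.json from memory. The exact 130-byte payload is not shown in the log; the sketch below assumes the conventional exec-opts form:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Illustrative daemon.json; the real content written by the
    	// scp line above is not reproduced in the log.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }
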
	I1003 20:37:09.288188    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:09.384897    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:38:10.406862    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021950572s)
	I1003 20:38:10.406948    4951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:38:10.444120    4951 out.go:201] 
	W1003 20:38:10.464959    4951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:37:06 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391345461Z" level=info msg="Starting up"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391833106Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.395520305Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.412871636Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427882861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427981520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428050653Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428085226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428277072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428327604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428478894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428520070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428552138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428580964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428720722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428931280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430522141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430571354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430698188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430740032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430878079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430929217Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431351881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431440610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431485738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431519039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431551337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431619359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431825238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431902729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431941069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431978377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432012357Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432042063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432070459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432099321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432133473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432169855Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432202720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432268312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432315741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432351145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432383859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432414347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432447070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432476073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432510884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432548105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432578396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432608431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432640682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432669603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432698487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432729184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432768850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432801425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432829061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432911216Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432958882Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432989050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433017196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433045319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433074497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433102613Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433279017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433339149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433390358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433425703Z" level=info msg="containerd successfully booted in 0.021412s"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.415071774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.421056219Z" level=info msg="Loading containers: start."
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.500314931Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.331296883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.376605057Z" level=info msg="Loading containers: done."
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387546240Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387606581Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387647157Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387769053Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411526135Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411682523Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:37:08 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527035720Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:37:09 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527893788Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528149338Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528188105Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528221468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:10 ha-214000-m02 dockerd[929]: time="2024-10-04T03:37:10.559000347Z" level=info msg="Starting up"
	Oct 04 03:38:10 ha-214000-m02 dockerd[929]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:38:10.465066    4951 out.go:270] * 
	W1003 20:38:10.466299    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:38:10.543824    4951 out.go:201] 
	
	
	==> Docker <==
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547509520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547587217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547600394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547679278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 cri-dockerd[1370]: time="2024-10-04T03:36:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc56c1f3c299c74527bc4bad7199ef2947f06a7fa736aaf71ff605e8aa07e0ac/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613336411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613472160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613483466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613584473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646020305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646150537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646177738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646306268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688829574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688917158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688931527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.689001023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:15 ha-214000 dockerd[1116]: time="2024-10-04T03:37:15.932971006Z" level=info msg="ignoring event" container=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933175492Z" level=info msg="shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933213613Z" level=warning msg="cleaning up after shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933220107Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691601551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691989127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692103682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692303810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ebd8d90ba3e8f       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   f61fdecdb5ed1       storage-provisioner
	f9fb6aeea4b68       12968670680f4                                                                                         3 minutes ago       Running             kindnet-cni               1                   fc56c1f3c299c       kindnet-flq8x
	e388df4554b33       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   2401eafe0bd31       busybox-7dff88458-m7hqf
	e6ef332ed5737       c69fa2e9cbf5f                                                                                         3 minutes ago       Running             coredns                   1                   c142be8b44551       coredns-7c65d6cfc9-slrtf
	985956e1cb3da       c69fa2e9cbf5f                                                                                         3 minutes ago       Running             coredns                   1                   bf51af5037cab       coredns-7c65d6cfc9-l4wpg
	666390dc434d9       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   f61fdecdb5ed1       storage-provisioner
	e870db0c09c44       60c005f310ff3                                                                                         3 minutes ago       Running             kube-proxy                1                   69d6d030cf38a       kube-proxy-grxks
	2bccf57dd1cf7       18b729c2288dc                                                                                         3 minutes ago       Running             kube-vip                  0                   013ce7946a369       kube-vip-ha-214000
	5c0e6f76f23f0       9aa1fad941575                                                                                         3 minutes ago       Running             kube-scheduler            1                   3b875ceff5048       kube-scheduler-ha-214000
	3a34ed1393f8c       2e96e5913fc06                                                                                         3 minutes ago       Running             etcd                      1                   9863db4133f6a       etcd-ha-214000
	18a77afff888c       6bab7719df100                                                                                         3 minutes ago       Running             kube-apiserver            1                   61526ecfca3d5       kube-apiserver-ha-214000
	bf67ec881904c       175ffd71cce3d                                                                                         3 minutes ago       Running             kube-controller-manager   1                   e9d8b9ee53b05       kube-controller-manager-ha-214000
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   26 minutes ago      Exited              busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              27 minutes ago      Exited              kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         27 minutes ago      Exited              kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	95af0d749f454       6bab7719df100                                                                                         27 minutes ago      Exited              kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         27 minutes ago      Exited              kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         27 minutes ago      Exited              etcd                      0                   67bb6b863c2ab       etcd-ha-214000
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [985956e1cb3d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58650 - 4380 "HINFO IN 7121940411115309935.5046063770853036442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044429796s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[262325284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.911) (total time: 30001ms):
	Trace[262325284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[262325284]: [30.001612351s] [30.001612351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1313713214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30002ms):
	Trace[1313713214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.914)
	Trace[1313713214]: [30.00243392s] [30.00243392s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1235317752]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30003ms):
	Trace[1235317752]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.915)
	Trace[1235317752]: [30.003174126s] [30.003174126s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
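
	The reflector errors above show this CoreDNS replica timing out against the in-cluster apiserver Service (10.96.0.1:443) while the control plane was restarting. A minimal way to check the same path by hand, assuming kubectl access to the ha-214000 context and that curl is present in the minikube guest image (both assumptions, not taken from this log):
	
	  # confirm the kubernetes Service fronts live apiserver endpoints
	  kubectl --context ha-214000 get endpoints kubernetes -o wide
	  # probe the Service VIP from inside the node
	  minikube -p ha-214000 ssh -- curl -sk https://10.96.0.1:443/healthz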
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6ef332ed573] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60885 - 59353 "HINFO IN 8975973012052876199.2679720306794618198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011598991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1634039844]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30003ms):
	Trace[1634039844]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.913)
	Trace[1634039844]: [30.003123911s] [30.003123911s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1181919593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30001ms):
	Trace[1181919593]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.914)
	Trace[1181919593]: [30.001966872s] [30.001966872s] END
	[INFO] plugin/kubernetes: Trace[1826819322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30001ms):
	Trace[1826819322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[1826819322]: [30.001980832s] [30.001980832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:39:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cf440f8eb534a62b20c31c760022e88
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    a841ad05-f0b0-46f0-962d-fb6544f3eb77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m (x3 over 27m)      kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x3 over 27m)      kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x2 over 27m)      kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                    node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                27m                    kubelet          Node ha-214000 status is now: NodeReady
	  Normal  Starting                 3m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m27s (x8 over 3m27s)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x8 over 3m27s)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x7 over 3m27s)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  RegisteredNode           19s                    node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-214000-m03 status is now: NodeReady
	  Normal  RegisteredNode           3m14s              node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeNotReady             2m34s              node-controller  Node ha-214000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           19s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	
	
	Name:               ha-214000-m04
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_39_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:39:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:39:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:39:51 +0000   Fri, 04 Oct 2024 03:39:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:39:51 +0000   Fri, 04 Oct 2024 03:39:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:39:51 +0000   Fri, 04 Oct 2024 03:39:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:39:51 +0000   Fri, 04 Oct 2024 03:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-214000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 74e90d24604f4309936c08b342fe3bb8
	  System UUID:                7b814f14-0000-0000-bee8-97f30534dce1
	  Boot ID:                    55c0a0f6-9d6d-4ccc-b298-66977105d627
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z5g4l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-214000-m04                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25s
	  kube-system                 kindnet-lmxhp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27s
	  kube-system                 kube-apiserver-ha-214000-m04             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-ha-214000-m04    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-t2c5z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-ha-214000-m04             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-vip-ha-214000-m04                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27s (x8 over 28s)  kubelet          Node ha-214000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 28s)  kubelet          Node ha-214000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 28s)  kubelet          Node ha-214000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node ha-214000-m04 event: Registered Node ha-214000-m04 in Controller
	  Normal  RegisteredNode           20s                node-controller  Node ha-214000-m04 event: Registered Node ha-214000-m04 in Controller
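
	Note that ha-214000-m03 above carries node.kubernetes.io/unreachable taints and Unknown conditions ("Kubelet stopped posting node status"), i.e. the node-lifecycle controller marked it unreachable after its kubelet stopped heartbeating. A quick way to confirm readiness and taints for the nodes in this cluster, assuming kubectl access to the ha-214000 context:
	
	  # readiness at a glance for every node in the profile
	  kubectl --context ha-214000 get nodes -o wide
	  # print the taints placed on the unreachable node
	  kubectl --context ha-214000 get node ha-214000-m03 \
	    -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'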
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036162] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007697] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.692433] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.638785] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.210765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 4 03:36] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.104887] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +1.918276] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.255425] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +0.098027] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.126300] systemd-fstab-generator[1107]: Ignoring "noauto" option for root device
	[  +2.450359] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.101909] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.051667] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.054007] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +0.136592] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.448600] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +6.747516] kauditd_printk_skb: 88 callbacks suppressed
	[  +7.915746] kauditd_printk_skb: 40 callbacks suppressed
	[Oct 4 03:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:27:00.215476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-04T03:27:00.217042Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"1.236946ms","hash":1433174615,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-04T03:27:00.217099Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1433174615,"revision":1514,"compact-revision":973}
	{"level":"info","ts":"2024-10-04T03:31:28.060845Z","caller":"traceutil/trace.go:171","msg":"trace[860112081] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"112.489562ms","start":"2024-10-04T03:31:27.948335Z","end":"2024-10-04T03:31:28.060825Z","steps":["trace[860112081] 'process raft request'  (duration: 91.094323ms)","trace[860112081] 'compare'  (duration: 21.269614ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:31:44.553900Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:31:44.553958Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	{"level":"warn","ts":"2024-10-04T03:31:44.554007Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554028Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554108Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.562422Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:31:44.579712Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-10-04T03:31:44.581173Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581242Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581251Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [3a34ed1393f8] <==
	{"level":"info","ts":"2024-10-04T03:36:39.492642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:36:39.493003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:39:31.478440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860) learners=(6196648108447213699)"}
	{"level":"info","ts":"2024-10-04T03:39:31.478753Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"55feea9f9617a883","added-peer-peer-urls":["https://192.169.0.8:2380"]}
	{"level":"info","ts":"2024-10-04T03:39:31.479001Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.479121Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.479669Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.479836Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883","remote-peer-urls":["https://192.169.0.8:2380"]}
	{"level":"info","ts":"2024-10-04T03:39:31.479972Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480272Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480395Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480489Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480839Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"warn","ts":"2024-10-04T03:39:31.535942Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"55feea9f9617a883","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-04T03:39:31.536046Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"55feea9f9617a883","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"info","ts":"2024-10-04T03:39:32.311652Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.312234Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.312583Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.323139Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"55feea9f9617a883","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-10-04T03:39:32.323311Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.333366Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"55feea9f9617a883","stream-type":"stream Message"}
	{"level":"info","ts":"2024-10-04T03:39:32.333407Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:33.013374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6196648108447213699 13314548521573537860)"}
	{"level":"info","ts":"2024-10-04T03:39:33.013685Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-10-04T03:39:33.013729Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"55feea9f9617a883"}
	
	
	==> kernel <==
	 03:39:58 up 4 min,  0 users,  load average: 0.15, 0.16, 0.08
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:30:43.497755       1 main.go:299] handling current node
	I1004 03:30:53.496402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:53.496595       1 main.go:299] handling current node
	I1004 03:30:53.496647       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:53.496795       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:03.496468       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:03.496619       1 main.go:299] handling current node
	I1004 03:31:03.496645       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:03.496656       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:13.497200       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:13.497236       1 main.go:299] handling current node
	I1004 03:31:13.497252       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:13.497259       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:23.497508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:23.497727       1 main.go:299] handling current node
	I1004 03:31:23.497777       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:23.497873       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:33.499104       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:33.499148       1 main.go:299] handling current node
	I1004 03:31:33.499160       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:33.499165       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:43.499561       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:43.499582       1 main.go:299] handling current node
	I1004 03:31:43.499592       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:43.499596       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f9fb6aeea4b6] <==
	I1004 03:39:17.017865       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:17.017946       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:27.017939       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:27.017995       1 main.go:299] handling current node
	I1004 03:39:27.018014       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:27.018027       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:37.016464       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:37.016506       1 main.go:299] handling current node
	I1004 03:39:37.016518       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:37.016523       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:37.016805       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I1004 03:39:37.016835       1 main.go:322] Node ha-214000-m04 has CIDR [10.244.2.0/24] 
	I1004 03:39:37.016917       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0} 
	I1004 03:39:47.017663       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:47.017736       1 main.go:299] handling current node
	I1004 03:39:47.017812       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:47.017826       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:47.018266       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I1004 03:39:47.018288       1 main.go:322] Node ha-214000-m04 has CIDR [10.244.2.0/24] 
	I1004 03:39:57.014884       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:57.014922       1 main.go:299] handling current node
	I1004 03:39:57.014933       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:57.014940       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:57.015044       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I1004 03:39:57.015071       1 main.go:322] Node ha-214000-m04 has CIDR [10.244.2.0/24] 
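
	The kindnet entries above show the primary node learning ha-214000-m04's pod CIDR (10.244.2.0/24) and installing a route to it via 192.169.0.8. One way to verify the programmed routes from the host, assuming the same minikube binary used elsewhere in this report:
	
	  # show the pod-network routes kindnet added on the primary node
	  minikube -p ha-214000 ssh -- ip route show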
	
	
	==> kube-apiserver [18a77afff888] <==
	I1004 03:36:40.325058       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 03:36:40.325213       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 03:36:40.334937       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1004 03:36:40.335017       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1004 03:36:40.364395       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:36:40.364604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:36:40.364759       1 policy_source.go:224] refreshing policies
	I1004 03:36:40.374385       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:36:40.418647       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 03:36:40.423571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:36:40.423778       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:36:40.423914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:36:40.424647       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:36:40.424699       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:36:40.425567       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:36:40.435139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:36:40.435487       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:36:40.435554       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:36:40.435596       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:36:40.435678       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:36:40.437733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:36:41.323990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1004 03:36:41.538108       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:36:41.539664       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:36:41.543233       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [95af0d749f45] <==
	W1004 03:31:45.565832       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.564688       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565697       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.563880       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565578       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571249       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571472       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571633       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571818       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572042       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572188       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572478       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572615       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572727       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572882       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572220       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572633       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572900       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572324       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572348       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572056       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572740       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572444       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.573046       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572405       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	I1004 03:26:22.202705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:28:54.315206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:31:28.798824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-controller-manager [bf67ec881904] <==
	I1004 03:39:31.158132       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m04" podCIDRs=["10.244.2.0/24"]
	I1004 03:39:31.158171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:31.158188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:31.170984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:31.444061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.099140       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.689288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.768707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.771502       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m04"
	I1004 03:39:33.862969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:35.375953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.231µs"
	I1004 03:39:38.447222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:39:38.458713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:38.516627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:39:41.353245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:43.944945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:48.603231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:51.531963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:51.539153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:51.544111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.605µs"
	I1004 03:39:51.554478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.021µs"
	I1004 03:39:51.560860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.67µs"
	I1004 03:39:53.474437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:58.478434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.285042ms"
	I1004 03:39:58.478539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.202µs"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e870db0c09c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:36:45.742770       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:36:45.775231       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:36:45.775291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:36:45.922303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:36:45.922329       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:36:45.922347       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:36:45.927222       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:36:45.928115       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:36:45.928127       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:45.937610       1 config.go:199] "Starting service config controller"
	I1004 03:36:45.937639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:36:45.937654       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:36:45.937658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:36:45.937932       1 config.go:328] "Starting node config controller"
	I1004 03:36:45.937937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:36:46.038944       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:36:46.039004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:36:46.051315       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5c0e6f76f23f] <==
	I1004 03:36:38.366946       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:36:40.340041       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:36:40.340076       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:36:40.340085       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:36:40.340089       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:36:40.388605       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:36:40.388643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:40.391116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:36:40.391386       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:36:40.391458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:36:40.391415       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:36:40.493018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:31:44.485971       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1004 03:31:44.486818       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1004 03:31:44.487813       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1004 03:31:44.490023       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.651585    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-xtables-lock\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652013    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-xtables-lock\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652345    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-lib-modules\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.667905    1532 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.744827    1532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-214000" podStartSLOduration=0.744814896 podStartE2EDuration="744.814896ms" podCreationTimestamp="2024-10-04 03:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:36:44.735075165 +0000 UTC m=+14.226297815" watchObservedRunningTime="2024-10-04 03:36:44.744814896 +0000 UTC m=+14.236037540"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296434    1532 scope.go:117] "RemoveContainer" containerID="792bd20fa10c95874d8ad89fc2ecf38b64e23df2d19d9b348cf3e9c46121c1b2"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296668    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: E1004 03:37:16.296799    1532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f5e9cfaf-fc93-45bd-9061-cf51f9eef735)\"" pod="kube-system/storage-provisioner" podUID="f5e9cfaf-fc93-45bd-9061-cf51f9eef735"
	Oct 04 03:37:26 ha-214000 kubelet[1532]: I1004 03:37:26.640771    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: I1004 03:37:30.670798    1532 scope.go:117] "RemoveContainer" containerID="2e5127305b39f8d6e99e701a21860eb86b129da510647193574f5beeb8153b48"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: E1004 03:37:30.694461    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:38:30 ha-214000 kubelet[1532]: E1004 03:38:30.681723    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:38:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:38:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:38:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:38:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:39:30 ha-214000 kubelet[1532]: E1004 03:39:30.681080    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:39:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:39:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:39:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:39:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (101.81s)
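A failure like the one above can be triaged by hand with the same checks the harness runs; a sketch against this run's profile (ha-214000):

	out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
	kubectl --context ha-214000 get po -A --field-selector=status.phase!=Running
	out/minikube-darwin-amd64 -p ha-214000 logs -n 25

The first command reports the API-server state for the profile, the second lists any pods stuck outside the Running phase, and the third collects the last 25 lines of each component log, exactly as captured in the post-mortem above.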

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.47s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:309: expected profile "ha-214000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-214000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-214000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-214000 -n ha-214000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 logs -n 25: (3.525802389s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- get pods -o          | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-9tvdj              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:24 PDT |
	|         | busybox-7dff88458-m7hqf -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-214000 -- exec                 | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT |                     |
	|         | busybox-7dff88458-z5g4l              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-214000 -v=7                | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:24 PDT | 03 Oct 24 20:25 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node stop m02 -v=7         | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT | 03 Oct 24 20:26 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-214000 node start m02 -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:26 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000 -v=7               | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-214000 -v=7                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT | 03 Oct 24 20:31 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true -v=7        | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-214000                    | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	| node    | ha-214000 node delete m03 -v=7       | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-214000 stop -v=7                  | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:33 PDT | 03 Oct 24 20:35 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-214000 --wait=true             | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:35 PDT |                     |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	| node    | add -p ha-214000                     | ha-214000 | jenkins | v1.34.0 | 03 Oct 24 20:38 PDT | 03 Oct 24 20:39 PDT |
	|         | --control-plane -v=7                 |           |         |         |                     |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:35:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:35:48.304540    4951 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:48.304733    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304739    4951 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:48.304743    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.304927    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:35:48.306332    4951 out.go:352] Setting JSON to false
	I1003 20:35:48.334066    4951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3918,"bootTime":1728009030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:35:48.334215    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:35:48.356076    4951 out.go:177] * [ha-214000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:35:48.398703    4951 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:35:48.398800    4951 notify.go:220] Checking for updates...
	I1003 20:35:48.442667    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:35:48.463910    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:35:48.485340    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:35:48.506572    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:35:48.527740    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:35:48.550278    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:48.551029    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.551094    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.563226    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51755
	I1003 20:35:48.563804    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.564307    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.564319    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.564662    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.564822    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.565117    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:35:48.565435    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.565487    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.576762    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I1003 20:35:48.577263    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.577677    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.577713    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.578069    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.578299    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.610723    4951 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:35:48.652521    4951 start.go:297] selected driver: hyperkit
	I1003 20:35:48.652550    4951 start.go:901] validating driver "hyperkit" against &{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.652818    4951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:35:48.653002    4951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.653249    4951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 20:35:48.665237    4951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 20:35:48.671535    4951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.671574    4951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 20:35:48.676549    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:35:48.676588    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:35:48.676625    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:35:48.676690    4951 start.go:340] cluster config:
	{Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:48.676815    4951 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:48.698601    4951 out.go:177] * Starting "ha-214000" primary control-plane node in "ha-214000" cluster
	I1003 20:35:48.740785    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:35:48.740857    4951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 20:35:48.740884    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:35:48.741146    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:35:48.741164    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:35:48.741343    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.742237    4951 start.go:360] acquireMachinesLock for ha-214000: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:48.742380    4951 start.go:364] duration metric: took 119.499µs to acquireMachinesLock for "ha-214000"
	I1003 20:35:48.742414    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:48.742428    4951 fix.go:54] fixHost starting: 
	I1003 20:35:48.742857    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.742889    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.754302    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I1003 20:35:48.754621    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.754990    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.755005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.755241    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.755370    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.755459    4951 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:35:48.755544    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.755632    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:35:48.756648    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.756678    4951 fix.go:112] recreateIfNeeded on ha-214000: state=Stopped err=<nil>
	I1003 20:35:48.756695    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	W1003 20:35:48.756784    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:48.778933    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000" ...
	I1003 20:35:48.800930    4951 main.go:141] libmachine: (ha-214000) Calling .Start
	I1003 20:35:48.801199    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.801247    4951 main.go:141] libmachine: (ha-214000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid
	I1003 20:35:48.803311    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.803325    4951 main.go:141] libmachine: (ha-214000) DBG | pid 4822 is in state "Stopped"
	I1003 20:35:48.803341    4951 main.go:141] libmachine: (ha-214000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid...
	I1003 20:35:48.803610    4951 main.go:141] libmachine: (ha-214000) DBG | Using UUID 34c8a14a-13f3-4010-ae73-2f65fb092988
	I1003 20:35:48.922193    4951 main.go:141] libmachine: (ha-214000) DBG | Generated MAC a:aa:e8:3c:fe:20
	I1003 20:35:48.922226    4951 main.go:141] libmachine: (ha-214000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:35:48.922379    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922424    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"34c8a14a-13f3-4010-ae73-2f65fb092988", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cff20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:35:48.922546    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "34c8a14a-13f3-4010-ae73-2f65fb092988", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:35:48.922605    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 34c8a14a-13f3-4010-ae73-2f65fb092988 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/ha-214000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:35:48.922622    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:35:48.924313    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 DEBUG: hyperkit: Pid is 4964
	I1003 20:35:48.924838    4951 main.go:141] libmachine: (ha-214000) DBG | Attempt 0
	I1003 20:35:48.924852    4951 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.924911    4951 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4964
	I1003 20:35:48.927353    4951 main.go:141] libmachine: (ha-214000) DBG | Searching for a:aa:e8:3c:fe:20 in /var/db/dhcpd_leases ...
	I1003 20:35:48.927405    4951 main.go:141] libmachine: (ha-214000) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:35:48.927432    4951 main.go:141] libmachine: (ha-214000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff6fc2}
	I1003 20:35:48.927443    4951 main.go:141] libmachine: (ha-214000) DBG | Found match: a:aa:e8:3c:fe:20
	I1003 20:35:48.927454    4951 main.go:141] libmachine: (ha-214000) DBG | IP: 192.169.0.5
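
The driver resolves the VM's address by scanning macOS's /var/db/dhcpd_leases for the MAC it generated. A rough sketch of that lookup; the ip_address=/hw_address= block layout is an assumption about the lease-file format, not something shown in this log:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans a dhcpd_leases-style file for the entry whose
    // hw_address contains the given MAC and returns its ip_address, if any.
    func findLeaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			if strings.Contains(line, mac) {
    				return ip, sc.Err()
    			}
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := findLeaseIP("/var/db/dhcpd_leases", "a:aa:e8:3c:fe:20")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println(ip) // 192.169.0.5 in the run above
    }
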
	I1003 20:35:48.927543    4951 main.go:141] libmachine: (ha-214000) Calling .GetConfigRaw
	I1003 20:35:48.928494    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:35:48.928701    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:35:48.929276    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:35:48.929289    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:35:48.929410    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:35:48.929535    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:35:48.929649    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929777    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:35:48.929900    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:35:48.930094    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:35:48.930303    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:35:48.930312    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:35:48.935400    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:35:48.990306    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:35:48.991238    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:48.991260    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:48.991278    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:48.991294    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.374490    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:35:49.374504    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:35:49.489812    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:35:49.489840    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:35:49.489854    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:35:49.489865    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:35:49.490699    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:35:49.490709    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:35:55.079541    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:35:55.079635    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:35:55.079652    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:35:55.103846    4951 main.go:141] libmachine: (ha-214000) DBG | 2024/10/03 20:35:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:36:23.994265    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
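
provisionDockerMachine runs each step as a single command over a native Go SSH client, as in the `hostname` probe that just returned "minikube". A minimal sketch of that round trip with golang.org/x/crypto/ssh (the runSSH helper is illustrative; host, user, and key path are taken from the log):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH executes one command on the VM, mirroring the hostname probe above.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.169.0.5:22", "docker",
    		"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa",
    		"hostname")
    	fmt.Print(out)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
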
	I1003 20:36:23.994281    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994427    4951 buildroot.go:166] provisioning hostname "ha-214000"
	I1003 20:36:23.994438    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:23.994568    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:23.994676    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:23.994778    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994888    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:23.994989    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:23.995134    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:23.995292    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:23.995301    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000 && echo "ha-214000" | sudo tee /etc/hostname
	I1003 20:36:24.061419    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000
	
	I1003 20:36:24.061438    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.061566    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.061665    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061761    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.061855    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.062009    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.062160    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.062171    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:36:24.123229    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
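
The shell above updates /etc/hosts idempotently: rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic as a small Go sketch (paths and permissions are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // ensureHostsEntry mirrors the shell above: if the hostname is absent,
    // rewrite an existing 127.0.1.1 line or append a new entry.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil // already present, nothing to do
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + hostname
    	if loopback.Match(data) {
    		data = loopback.ReplaceAll(data, []byte(entry))
    	} else {
    		data = append(data, []byte(entry+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "ha-214000"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
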
	I1003 20:36:24.123250    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:36:24.123267    4951 buildroot.go:174] setting up certificates
	I1003 20:36:24.123274    4951 provision.go:84] configureAuth start
	I1003 20:36:24.123280    4951 main.go:141] libmachine: (ha-214000) Calling .GetMachineName
	I1003 20:36:24.123436    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:24.123534    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.123640    4951 provision.go:143] copyHostCerts
	I1003 20:36:24.123670    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123751    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:36:24.123759    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:36:24.123933    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:36:24.124159    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124208    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:36:24.124213    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:36:24.124299    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:36:24.124456    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124504    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:36:24.124508    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:36:24.124593    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:36:24.124759    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000 san=[127.0.0.1 192.169.0.5 ha-214000 localhost minikube]
	I1003 20:36:24.242470    4951 provision.go:177] copyRemoteCerts
	I1003 20:36:24.242536    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:36:24.242550    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.242680    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.242779    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.242882    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.242976    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:24.278106    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:36:24.278181    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:36:24.297749    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:36:24.297814    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1003 20:36:24.317337    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:36:24.317417    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:36:24.337360    4951 provision.go:87] duration metric: took 214.07513ms to configureAuth
	I1003 20:36:24.337374    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:36:24.337568    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:24.337582    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:24.337722    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.337811    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.337893    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.337973    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.338066    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.338199    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.338322    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.338329    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:36:24.392942    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:36:24.392953    4951 buildroot.go:70] root file system type: tmpfs
	I1003 20:36:24.393026    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:36:24.393038    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.393177    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.393275    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393375    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.393458    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.393607    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.393746    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.393789    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:36:24.457890    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:36:24.457915    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:24.458049    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:24.458145    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458223    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:24.458324    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:24.458459    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:24.458606    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:24.458617    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:36:26.102134    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:36:26.102148    4951 machine.go:96] duration metric: took 37.172864722s to provisionDockerMachine
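
The `diff ... || { mv ...; systemctl ... }` idiom above only swaps in docker.service.new and restarts Docker when the rendered unit actually differs; here the diff failed because no unit existed yet, so the new one was installed and enabled. A sketch of that check-then-swap in Go, with the systemctl calls issued via os/exec (illustrative, not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installUnit swaps the staged unit into place and restarts the service
    // only when the content actually changed, matching the idiom above.
    func installUnit(current, staged, service string) error {
    	oldData, _ := os.ReadFile(current) // a missing file simply means "needs install"
    	newData, err := os.ReadFile(staged)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(oldData, newData) {
    		return os.Remove(staged) // unchanged: drop the staged copy, no restart
    	}
    	if err := os.Rename(staged, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", service},
    		{"restart", service},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := installUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new", "docker")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
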
	I1003 20:36:26.102162    4951 start.go:293] postStartSetup for "ha-214000" (driver="hyperkit")
	I1003 20:36:26.102174    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:36:26.102184    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.102399    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:36:26.102415    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.102503    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.102602    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.102703    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.102803    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.136711    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:36:26.139862    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:36:26.139874    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:36:26.139975    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:36:26.140193    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:36:26.140200    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:36:26.140451    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:36:26.147627    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:26.167774    4951 start.go:296] duration metric: took 65.6041ms for postStartSetup
	I1003 20:36:26.167794    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.167968    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:36:26.167979    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.168089    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.168182    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.168259    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.168350    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.202842    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:36:26.202914    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:36:26.255647    4951 fix.go:56] duration metric: took 37.513223093s for fixHost
	I1003 20:36:26.255670    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.255816    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.255918    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256012    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.256105    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.256247    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:26.256399    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1003 20:36:26.256406    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:36:26.311780    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012986.433392977
	
	I1003 20:36:26.311792    4951 fix.go:216] guest clock: 1728012986.433392977
	I1003 20:36:26.311797    4951 fix.go:229] Guest: 2024-10-03 20:36:26.433392977 -0700 PDT Remote: 2024-10-03 20:36:26.25566 -0700 PDT m=+37.989104353 (delta=177.732977ms)
	I1003 20:36:26.311814    4951 fix.go:200] guest clock delta is within tolerance: 177.732977ms
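
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the drift is within tolerance (177.7ms here). A small sketch of that comparison; the one-second tolerance below is an assumption, not a value from the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns its
    // signed offset from the supplied host time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Guest value taken from the log above; compared against "now" here
    	// instead of the recorded host timestamp.
    	d, err := clockDelta("1728012986.433392977", time.Now())
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	const tolerance = time.Second // assumed threshold
    	fmt.Printf("delta=%v withinTolerance=%v\n", d, d > -tolerance && d < tolerance)
    }
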
	I1003 20:36:26.311818    4951 start.go:83] releasing machines lock for "ha-214000", held for 37.569431066s
	I1003 20:36:26.311838    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.311964    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:26.312074    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312353    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312465    4951 main.go:141] libmachine: (ha-214000) Calling .DriverName
	I1003 20:36:26.312560    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:36:26.312588    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312635    4951 ssh_runner.go:195] Run: cat /version.json
	I1003 20:36:26.312646    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHHostname
	I1003 20:36:26.312690    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312745    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHPort
	I1003 20:36:26.312781    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312825    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHKeyPath
	I1003 20:36:26.312873    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.312925    4951 main.go:141] libmachine: (ha-214000) Calling .GetSSHUsername
	I1003 20:36:26.313009    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.313022    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000/id_rsa Username:docker}
	I1003 20:36:26.345222    4951 ssh_runner.go:195] Run: systemctl --version
	I1003 20:36:26.396121    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:36:26.401139    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:36:26.401189    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:36:26.413838    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:36:26.413851    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.413956    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.430665    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:36:26.439518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:36:26.448241    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:36:26.448295    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:36:26.457135    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.465984    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:36:26.474764    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:36:26.483576    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:36:26.492518    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:36:26.501284    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:36:26.510114    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
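
The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place; the SystemdCgroup = false edit is what selects the "cgroupfs" driver noted at containerd.go:146. The same edit expressed as a Go regexp replace (illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCgroupfs rewrites any SystemdCgroup assignment to false, preserving
    // indentation, like the sed invocation above.
    func setCgroupfs(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
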
	I1003 20:36:26.518992    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:36:26.527133    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:36:26.527188    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:36:26.536233    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:36:26.544367    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:26.641761    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:36:26.661796    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:36:26.661912    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:36:26.678816    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.689242    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:36:26.701530    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:36:26.713140    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.724511    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:36:26.748353    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:36:26.759647    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:36:26.774287    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:36:26.777216    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:36:26.785211    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:36:26.800364    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:36:26.895359    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:36:27.004148    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:36:27.004239    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:36:27.018268    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:27.118971    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:36:29.441016    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.322026405s)
	I1003 20:36:29.441097    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:36:29.451786    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.462092    4951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:36:29.564537    4951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:36:29.669649    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.781720    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:36:29.795175    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:36:29.806194    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:29.917885    4951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:36:29.986582    4951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:36:29.986686    4951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:36:29.991213    4951 start.go:563] Will wait 60s for crictl version
	I1003 20:36:29.991273    4951 ssh_runner.go:195] Run: which crictl
	I1003 20:36:29.994306    4951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:36:30.019989    4951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:36:30.020072    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.036824    4951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:36:30.075524    4951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:36:30.075569    4951 main.go:141] libmachine: (ha-214000) Calling .GetIP
	I1003 20:36:30.076023    4951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1003 20:36:30.080492    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.091206    4951 kubeadm.go:883] updating cluster {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:36:30.091284    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:30.091356    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.103771    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.103786    4951 docker.go:615] Images already preloaded, skipping extraction
	I1003 20:36:30.103870    4951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:36:30.126324    4951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	ghcr.io/kube-vip/kube-vip:v0.8.3
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1003 20:36:30.126343    4951 cache_images.go:84] Images are preloaded, skipping loading
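
cache_images.go skips loading because every image required for v1.31.1 already shows up in `docker images --format {{.Repository}}:{{.Tag}}`. A sketch of that subset check, with a few image names hardcoded from the list above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagesPreloaded reports whether every required image is already present
    // in the runtime, mirroring the skip decision logged above.
    func imagesPreloaded(required []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := imagesPreloaded([]string{
    		"registry.k8s.io/kube-apiserver:v1.31.1",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/pause:3.10",
    	})
    	fmt.Println(ok, err)
    }
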
	I1003 20:36:30.126351    4951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I1003 20:36:30.126423    4951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-214000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:36:30.126505    4951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:36:30.165944    4951 cni.go:84] Creating CNI manager for ""
	I1003 20:36:30.165958    4951 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1003 20:36:30.165970    4951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:36:30.165987    4951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-214000 NodeName:ha-214000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:36:30.166068    4951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-214000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
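
kubeadm.go:187 renders this multi-document config from the option set logged at kubeadm.go:181. A toy text/template sketch for one fragment of the KubeletConfiguration document; the template and field names here are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A toy fragment of the KubeletConfiguration document above, rendered the
    // way a templated config generator would produce it.
    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.CRISocket}}
    clusterDomain: "{{.DNSDomain}}"
    `

    type opts struct {
    	CgroupDriver string
    	CRISocket    string
    	DNSDomain    string
    }

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
    	_ = t.Execute(os.Stdout, opts{
    		CgroupDriver: "cgroupfs",
    		CRISocket:    "unix:///var/run/cri-dockerd.sock",
    		DNSDomain:    "cluster.local",
    	})
    }
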
	I1003 20:36:30.166080    4951 kube-vip.go:115] generating kube-vip config ...
	I1003 20:36:30.166149    4951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1003 20:36:30.180124    4951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1003 20:36:30.180189    4951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
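
This manifest lands in staticPodPath (/etc/kubernetes/manifests), where the kubelet runs it directly as a static pod; a manifest that fails to parse there would silently leave the 192.169.0.254 VIP unmanaged. A quick sanity-parse sketch using gopkg.in/yaml.v3 (illustrative):

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // Minimal shape check for the static pod manifest written above; the
    // kubelet picks up anything under staticPodPath, so a parse failure here
    // would mean a dead VIP.
    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	var m struct {
    		Kind     string `yaml:"kind"`
    		Metadata struct {
    			Name string `yaml:"name"`
    		} `yaml:"metadata"`
    	}
    	if err := yaml.Unmarshal(data, &m); err != nil {
    		fmt.Fprintln(os.Stderr, "invalid manifest:", err)
    		return
    	}
    	fmt.Printf("kind=%s name=%s\n", m.Kind, m.Metadata.Name)
    }
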
	I1003 20:36:30.180256    4951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:36:30.189222    4951 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:36:30.189287    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1003 20:36:30.198523    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1003 20:36:30.212259    4951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:36:30.225613    4951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1003 20:36:30.239086    4951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1003 20:36:30.252640    4951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1003 20:36:30.255560    4951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:36:30.265017    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:36:30.361055    4951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:36:30.373903    4951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000 for IP: 192.169.0.5
	I1003 20:36:30.373915    4951 certs.go:194] generating shared ca certs ...
	I1003 20:36:30.373925    4951 certs.go:226] acquiring lock for ca certs: {Name:mk1d63e444d4e1c96de0d297147b8d3362ff5d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.374133    4951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key
	I1003 20:36:30.374229    4951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key
	I1003 20:36:30.374245    4951 certs.go:256] generating profile certs ...
	I1003 20:36:30.374372    4951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key
	I1003 20:36:30.374395    4951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9
	I1003 20:36:30.374412    4951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1003 20:36:30.510048    4951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 ...
	I1003 20:36:30.510064    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9: {Name:mkec630c178c10067131af2c5f3c9dd0e1fb1860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510503    4951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 ...
	I1003 20:36:30.510513    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9: {Name:mk3eade5c23e406463c386755ec0dc38e869ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.510763    4951 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt
	I1003 20:36:30.511004    4951 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key.1a4a47c9 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key
	I1003 20:36:30.511276    4951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key
	I1003 20:36:30.511286    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 20:36:30.511308    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 20:36:30.511328    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 20:36:30.511347    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 20:36:30.511373    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 20:36:30.511393    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 20:36:30.511411    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 20:36:30.511428    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 20:36:30.511527    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem (1338 bytes)
	W1003 20:36:30.511580    4951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003_empty.pem, impossibly tiny 0 bytes
	I1003 20:36:30.511594    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:36:30.511627    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem (1082 bytes)
	I1003 20:36:30.511660    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:36:30.511688    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem (1679 bytes)
	I1003 20:36:30.511757    4951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:36:30.511791    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem -> /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.511811    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.511829    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.512286    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:36:30.547800    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 20:36:30.588463    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:36:30.624659    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:36:30.646082    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:36:30.665519    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 20:36:30.684966    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:36:30.704971    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:36:30.724730    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/2003.pem --> /usr/share/ca-certificates/2003.pem (1338 bytes)
	I1003 20:36:30.744135    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /usr/share/ca-certificates/20032.pem (1708 bytes)
	I1003 20:36:30.763735    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:36:30.782963    4951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:36:30.796275    4951 ssh_runner.go:195] Run: openssl version
	I1003 20:36:30.800456    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2003.pem && ln -fs /usr/share/ca-certificates/2003.pem /etc/ssl/certs/2003.pem"
	I1003 20:36:30.808784    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812168    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:07 /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.812211    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2003.pem
	I1003 20:36:30.816317    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2003.pem /etc/ssl/certs/51391683.0"
	I1003 20:36:30.824743    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20032.pem && ln -fs /usr/share/ca-certificates/20032.pem /etc/ssl/certs/20032.pem"
	I1003 20:36:30.833176    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836568    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:07 /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.836613    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20032.pem
	I1003 20:36:30.840895    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20032.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:36:30.849202    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:36:30.857643    4951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861134    4951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.861184    4951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:36:30.865411    4951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
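The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: the library looks up CAs in /etc/ssl/certs by "<hash>.0". Replaying the minikubeCA step by hand, using the same paths the log shows:
	# Derive the subject hash and create the hash-named symlink that the
	# OpenSSL certificate-directory lookup expects.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"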
	I1003 20:36:30.873865    4951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:36:30.877389    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:36:30.881788    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:36:30.886088    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:36:30.890422    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:36:30.894596    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:36:30.898773    4951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
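Each "-checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means it would expire inside the window and needs regeneration. A standalone sketch against one of the certs checked above:
	# Exit 0 if the cert remains valid for the next 24h, non-zero otherwise.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver.crt valid for at least another 24h"
	else
	    echo "apiserver.crt expires within 24h"
	fi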
	I1003 20:36:30.902881    4951 kubeadm.go:392] StartCluster: {Name:ha-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-214000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:36:30.902998    4951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:36:30.915213    4951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:36:30.923319    4951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:36:30.923331    4951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:36:30.923384    4951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:36:30.930635    4951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:36:30.930978    4951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-214000" does not appear in /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.931055    4951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1440/kubeconfig needs updating (will repair): [kubeconfig missing "ha-214000" cluster setting kubeconfig missing "ha-214000" context setting]
	I1003 20:36:30.931232    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.931928    4951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.932136    4951 kapi.go:59] client config for ha-214000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd994f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:36:30.932465    4951 cert_rotation.go:140] Starting client certificate rotation controller
	I1003 20:36:30.932658    4951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:36:30.939898    4951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1003 20:36:30.939909    4951 kubeadm.go:597] duration metric: took 16.574315ms to restartPrimaryControlPlane
	I1003 20:36:30.939914    4951 kubeadm.go:394] duration metric: took 37.038509ms to StartCluster
	I1003 20:36:30.939939    4951 settings.go:142] acquiring lock: {Name:mk8455cc5d7fdd5050a23a12b4fa0efeed62750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940028    4951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:36:30.940366    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1440/kubeconfig: {Name:mkbae7c951b62a8c57cfdccf12853098631befc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:30.940584    4951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:30.940597    4951 start.go:241] waiting for startup goroutines ...
	I1003 20:36:30.940605    4951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:36:30.940715    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:30.982685    4951 out.go:177] * Enabled addons: 
	I1003 20:36:31.003752    4951 addons.go:510] duration metric: took 63.132383ms for enable addons: enabled=[]
	I1003 20:36:31.003791    4951 start.go:246] waiting for cluster config update ...
	I1003 20:36:31.003802    4951 start.go:255] writing updated cluster config ...
	I1003 20:36:31.026641    4951 out.go:201] 
	I1003 20:36:31.047648    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:36:31.047721    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.069716    4951 out.go:177] * Starting "ha-214000-m02" control-plane node in "ha-214000" cluster
	I1003 20:36:31.111550    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:31.111584    4951 cache.go:56] Caching tarball of preloaded images
	I1003 20:36:31.111814    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1003 20:36:31.111847    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:36:31.111978    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.113032    4951 start.go:360] acquireMachinesLock for ha-214000-m02: {Name:mk8738c3956e0b6b9cba8247de2373e2c8b1e2fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:31.113184    4951 start.go:364] duration metric: took 124.813µs to acquireMachinesLock for "ha-214000-m02"
	I1003 20:36:31.113203    4951 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:36:31.113208    4951 fix.go:54] fixHost starting: m02
	I1003 20:36:31.113580    4951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:36:31.113606    4951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:36:31.125064    4951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I1003 20:36:31.125517    4951 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:36:31.125993    4951 main.go:141] libmachine: Using API Version  1
	I1003 20:36:31.126005    4951 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:36:31.126252    4951 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:36:31.126414    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.126604    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:36:31.126798    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.126890    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:36:31.127965    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:36:31.127999    4951 fix.go:112] recreateIfNeeded on ha-214000-m02: state=Stopped err=<nil>
	I1003 20:36:31.128009    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	W1003 20:36:31.128129    4951 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:36:31.170879    4951 out.go:177] * Restarting existing hyperkit VM for "ha-214000-m02" ...
	I1003 20:36:31.191480    4951 main.go:141] libmachine: (ha-214000-m02) Calling .Start
	I1003 20:36:31.191791    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.191820    4951 main.go:141] libmachine: (ha-214000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid
	I1003 20:36:31.191892    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Using UUID 03d82732-2b75-4dcf-994a-06b497e93635
	I1003 20:36:31.219578    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Generated MAC 8e:24:b7:e1:5:14
	I1003 20:36:31.219600    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000
	I1003 20:36:31.219761    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"03d82732-2b75-4dcf-994a-06b497e93635", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ea240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1003 20:36:31.219849    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "03d82732-2b75-4dcf-994a-06b497e93635", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"}
	I1003 20:36:31.219889    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 03d82732-2b75-4dcf-994a-06b497e93635 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/ha-214000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/tty,log=/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/bzimage,/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-214000"
	I1003 20:36:31.219902    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1003 20:36:31.221267    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 DEBUG: hyperkit: Pid is 4978
	I1003 20:36:31.221656    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Attempt 0
	I1003 20:36:31.221669    4951 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:36:31.221749    4951 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4978
	I1003 20:36:31.222942    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Searching for 8e:24:b7:e1:5:14 in /var/db/dhcpd_leases ...
	I1003 20:36:31.223055    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1003 20:36:31.223074    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:a:aa:e8:3c:fe:20 ID:1,a:aa:e8:3c:fe:20 Lease:0x66ff70ae}
	I1003 20:36:31.223092    4951 main.go:141] libmachine: (ha-214000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:24:b7:e1:5:14 ID:1,8e:24:b7:e1:5:14 Lease:0x66ff619f}
	I1003 20:36:31.223117    4951 main.go:141] libmachine: (ha-214000-m02) DBG | Found match: 8e:24:b7:e1:5:14
	I1003 20:36:31.223134    4951 main.go:141] libmachine: (ha-214000-m02) DBG | IP: 192.169.0.6
	I1003 20:36:31.223155    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetConfigRaw
	I1003 20:36:31.223858    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:36:31.224037    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/ha-214000/config.json ...
	I1003 20:36:31.224458    4951 machine.go:93] provisionDockerMachine start ...
	I1003 20:36:31.224468    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:36:31.224583    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:36:31.224679    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:36:31.224777    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.224929    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:36:31.225026    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:36:31.225183    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:36:31.225340    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:36:31.225347    4951 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:36:31.232364    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1003 20:36:31.241337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1003 20:36:31.242541    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.242561    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.242572    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.242585    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.630094    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1003 20:36:31.630110    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1003 20:36:31.744778    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1003 20:36:31.744796    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1003 20:36:31.744827    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1003 20:36:31.744846    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1003 20:36:31.745666    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1003 20:36:31.745681    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1003 20:36:37.337247    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1003 20:36:37.337337    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1003 20:36:37.337350    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1003 20:36:37.361028    4951 main.go:141] libmachine: (ha-214000-m02) DBG | 2024/10/03 20:36:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1003 20:37:06.292112    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:37:06.292127    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292262    4951 buildroot.go:166] provisioning hostname "ha-214000-m02"
	I1003 20:37:06.292277    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.292374    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.292454    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.292532    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292617    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.292696    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.292835    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.292968    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.292976    4951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-214000-m02 && echo "ha-214000-m02" | sudo tee /etc/hostname
	I1003 20:37:06.362584    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-214000-m02
	
	I1003 20:37:06.362599    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.362740    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.362851    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.362945    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.363048    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.363204    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.363366    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.363377    4951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-214000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-214000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-214000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:37:06.429246    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
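The /etc/hosts script above either rewrites an existing 127.0.1.1 entry in place or appends one, keeping exactly one local alias for the node name. The same logic against a scratch copy (hypothetical /tmp path; GNU sed, as on the Buildroot guest):
	# Pin the node hostname to 127.0.1.1 in a scratch copy of /etc/hosts.
	cp /etc/hosts /tmp/hosts.test
	if grep -q '^127\.0\.1\.1[[:space:]]' /tmp/hosts.test; then
	    sed -i 's/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ha-214000-m02/' /tmp/hosts.test
	else
	    echo '127.0.1.1 ha-214000-m02' >> /tmp/hosts.test
	fi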
	I1003 20:37:06.429262    4951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1440/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1440/.minikube}
	I1003 20:37:06.429275    4951 buildroot.go:174] setting up certificates
	I1003 20:37:06.429281    4951 provision.go:84] configureAuth start
	I1003 20:37:06.429287    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetMachineName
	I1003 20:37:06.429430    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:06.429529    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.429617    4951 provision.go:143] copyHostCerts
	I1003 20:37:06.429649    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429696    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem, removing ...
	I1003 20:37:06.429701    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem
	I1003 20:37:06.429820    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/cert.pem (1123 bytes)
	I1003 20:37:06.430049    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430079    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem, removing ...
	I1003 20:37:06.430084    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem
	I1003 20:37:06.430193    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/key.pem (1679 bytes)
	I1003 20:37:06.430369    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430399    4951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem, removing ...
	I1003 20:37:06.430404    4951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem
	I1003 20:37:06.430485    4951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1440/.minikube/ca.pem (1082 bytes)
	I1003 20:37:06.430651    4951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca-key.pem org=jenkins.ha-214000-m02 san=[127.0.0.1 192.169.0.6 ha-214000-m02 localhost minikube]
	I1003 20:37:06.504641    4951 provision.go:177] copyRemoteCerts
	I1003 20:37:06.504702    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:37:06.504733    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.504884    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.504988    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.505086    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.505168    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:06.541867    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 20:37:06.541936    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 20:37:06.560930    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 20:37:06.560992    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 20:37:06.579917    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 20:37:06.579984    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:37:06.599634    4951 provision.go:87] duration metric: took 170.34603ms to configureAuth
	I1003 20:37:06.599649    4951 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:37:06.599816    4951 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:37:06.599829    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:06.599963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.600044    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.600140    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600213    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.600306    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.600434    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.600557    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.600564    4951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:37:06.660138    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:37:06.660150    4951 buildroot.go:70] root file system type: tmpfs
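A tmpfs root means the guest runs from RAM (the Buildroot live image), so nothing written under /lib/systemd survives a reboot, which is consistent with the docker unit being regenerated in the very next step. The probe itself, standalone:
	# Print the filesystem type backing /: "tmpfs" on the live Buildroot
	# guest, typically "ext4" on a disk-installed system.
	df --output=fstype / | tail -n 1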
	I1003 20:37:06.660232    4951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:37:06.660242    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.660378    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.660498    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660607    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.660708    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.660861    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.661001    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.661049    4951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:37:06.728946    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:37:06.728963    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:06.729096    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:06.729209    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:06.729384    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:06.729544    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:06.729682    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:06.729693    4951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:37:08.289911    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:37:08.289925    4951 machine.go:96] duration metric: took 37.065461315s to provisionDockerMachine
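Note the install idiom used for the unit file a few lines up: diff exits non-zero when the staged docker.service.new differs from the live unit (or, as on this fresh tmpfs root, when no live unit exists), and only then is the file moved into place and the daemon restarted. Condensed, with a hypothetical staged path:
	# Install a staged unit only when it differs from the live one; diff
	# exits non-zero on difference or missing file, triggering the swap.
	sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
	    sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	}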
	I1003 20:37:08.289933    4951 start.go:293] postStartSetup for "ha-214000-m02" (driver="hyperkit")
	I1003 20:37:08.289944    4951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:37:08.289954    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.290150    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:37:08.290163    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.290256    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.290347    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.290425    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.290523    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.325637    4951 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:37:08.328747    4951 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:37:08.328757    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/addons for local assets ...
	I1003 20:37:08.328838    4951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1440/.minikube/files for local assets ...
	I1003 20:37:08.328975    4951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> 20032.pem in /etc/ssl/certs
	I1003 20:37:08.328981    4951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem -> /etc/ssl/certs/20032.pem
	I1003 20:37:08.329139    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:37:08.336279    4951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/ssl/certs/20032.pem --> /etc/ssl/certs/20032.pem (1708 bytes)
	I1003 20:37:08.355765    4951 start.go:296] duration metric: took 65.822719ms for postStartSetup
	I1003 20:37:08.355783    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.355979    4951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1003 20:37:08.355992    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.356088    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.356171    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.356261    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.356337    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.391155    4951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1003 20:37:08.391224    4951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1003 20:37:08.443555    4951 fix.go:56] duration metric: took 37.330343063s for fixHost
	I1003 20:37:08.443608    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.443871    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.444091    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444300    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.444537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.444747    4951 main.go:141] libmachine: Using SSH client type: native
	I1003 20:37:08.444947    4951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc2bed00] 0xc2c19e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1003 20:37:08.444959    4951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:37:08.504053    4951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013028.627108120
	
	I1003 20:37:08.504066    4951 fix.go:216] guest clock: 1728013028.627108120
	I1003 20:37:08.504071    4951 fix.go:229] Guest: 2024-10-03 20:37:08.62710812 -0700 PDT Remote: 2024-10-03 20:37:08.443578 -0700 PDT m=+80.177024984 (delta=183.53012ms)
	I1003 20:37:08.504082    4951 fix.go:200] guest clock delta is within tolerance: 183.53012ms
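The delta is just the difference of the two timestamps printed above: 1728013028.627108120 - 1728013028.443578 = 0.183530120 s, i.e. the 183.53012ms reported, small enough that the guest clock is left alone. To reproduce:
	# Difference between guest and host clock readings, in seconds.
	echo '1728013028.627108120 - 1728013028.443578' | bc
	# .183530120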
	I1003 20:37:08.504087    4951 start.go:83] releasing machines lock for "ha-214000-m02", held for 37.390896714s
	I1003 20:37:08.504111    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.504258    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetIP
	I1003 20:37:08.525607    4951 out.go:177] * Found network options:
	I1003 20:37:08.567619    4951 out.go:177]   - NO_PROXY=192.169.0.5
	W1003 20:37:08.588274    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.588315    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589205    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589467    4951 main.go:141] libmachine: (ha-214000-m02) Calling .DriverName
	I1003 20:37:08.589610    4951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:37:08.589649    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	W1003 20:37:08.589687    4951 proxy.go:119] fail to check proxy env: Error ip not in block
	I1003 20:37:08.589812    4951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 20:37:08.589832    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHHostname
	I1003 20:37:08.589864    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590034    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590064    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHPort
	I1003 20:37:08.590259    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590278    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHKeyPath
	I1003 20:37:08.590517    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	I1003 20:37:08.590537    4951 main.go:141] libmachine: (ha-214000-m02) Calling .GetSSHUsername
	I1003 20:37:08.590701    4951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/ha-214000-m02/id_rsa Username:docker}
	W1003 20:37:08.623322    4951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:37:08.623398    4951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:37:08.670987    4951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
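Disabling a CNI config here is just a rename: anything matching *bridge* or *podman* in /etc/cni/net.d gets an .mk_disabled suffix so the kubelet no longer loads it. For the single file the find matched above:
	# Equivalent manual rename for the one bridge config found.
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
	        /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled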
	I1003 20:37:08.671009    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.671107    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:37:08.687184    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:37:08.696174    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:37:08.705216    4951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:37:08.705268    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:37:08.714371    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.723383    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:37:08.732289    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:37:08.741295    4951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:37:08.750471    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:37:08.759323    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:37:08.768482    4951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:37:08.777704    4951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:37:08.785806    4951 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 20:37:08.785866    4951 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 20:37:08.794894    4951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
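	
	The status-255 sysctl above simply means br_netfilter is not loaded yet (the /proc key does not exist until it is), so the module is loaded and IPv4 forwarding switched on. As a standalone sketch:
	
	# Sketch: make the bridge-netfilter key exist, then enable forwarding.
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables          # now resolvable
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	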
	I1003 20:37:08.803171    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:08.897940    4951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:37:08.916833    4951 start.go:495] detecting cgroup driver to use...
	I1003 20:37:08.916918    4951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:37:08.930156    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.942286    4951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:37:08.960158    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:37:08.971885    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:08.982659    4951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:37:08.999726    4951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:37:09.010351    4951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" | sudo tee /etc/crictl.yaml"
	I1003 20:37:09.025433    4951 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:37:09.028502    4951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:37:09.035822    4951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
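	
	With containerd demoted, the CRI endpoint is repointed at cri-dockerd and a 190-byte CNI drop-in is pushed (its contents are scp'd from memory and not shown here). The endpoint switch alone, as a sketch:
	
	# Sketch: repoint crictl from containerd to cri-dockerd.
	sudo mkdir -p /etc
	printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
	sudo mkdir -p /etc/systemd/system/cri-docker.service.d   # drop-in dir for 10-cni.conf
	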
	I1003 20:37:09.049466    4951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:37:09.162468    4951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:37:09.273558    4951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:37:09.273582    4951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:37:09.288188    4951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:37:09.384897    4951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:38:10.406862    4951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021950572s)
	I1003 20:38:10.406948    4951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1003 20:38:10.444120    4951 out.go:201] 
	W1003 20:38:10.464959    4951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 04 03:37:06 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391345461Z" level=info msg="Starting up"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.391833106Z" level=info msg="containerd not running, starting managed containerd"
	Oct 04 03:37:06 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:06.395520305Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.412871636Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427882861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.427981520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428050653Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428085226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428277072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428327604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428478894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428520070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428552138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428580964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428720722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.428931280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430522141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430571354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430698188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430740032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430878079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.430929217Z" level=info msg="metadata content store policy set" policy=shared
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431351881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431440610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431485738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431519039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431551337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431619359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431825238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431902729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431941069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.431978377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432012357Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432042063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432070459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432099321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432133473Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432169855Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432202720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432268312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432315741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432351145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432383859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432414347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432447070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432476073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432510884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432548105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432578396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432608431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432640682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432669603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432698487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432729184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432768850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432801425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432829061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432911216Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432958882Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.432989050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433017196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433045319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433074497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433102613Z" level=info msg="NRI interface is disabled by configuration."
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433279017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433339149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433390358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 04 03:37:06 ha-214000-m02 dockerd[518]: time="2024-10-04T03:37:06.433425703Z" level=info msg="containerd successfully booted in 0.021412s"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.415071774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.421056219Z" level=info msg="Loading containers: start."
	Oct 04 03:37:07 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:07.500314931Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.331296883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.376605057Z" level=info msg="Loading containers: done."
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387546240Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387606581Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387647157Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.387769053Z" level=info msg="Daemon has completed initialization"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411526135Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 04 03:37:08 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:08.411682523Z" level=info msg="API listen on [::]:2376"
	Oct 04 03:37:08 ha-214000-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527035720Z" level=info msg="Processing signal 'terminated'"
	Oct 04 03:37:09 ha-214000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.527893788Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528149338Z" level=info msg="Daemon shutdown complete"
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528188105Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 04 03:37:09 ha-214000-m02 dockerd[511]: time="2024-10-04T03:37:09.528221468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 04 03:37:10 ha-214000-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 04 03:37:10 ha-214000-m02 dockerd[929]: time="2024-10-04T03:37:10.559000347Z" level=info msg="Starting up"
	Oct 04 03:38:10 ha-214000-m02 dockerd[929]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 04 03:38:10 ha-214000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1003 20:38:10.465066    4951 out.go:270] * 
	W1003 20:38:10.466299    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:38:10.543824    4951 out.go:201] 
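	
	Reading the journal excerpt above: the first dockerd start at 03:37:06 succeeds, the daemon.json push triggers the restart at 03:37:09, and the second dockerd (pid 929) then spends the full 60s timeout failing to dial /run/containerd/containerd.sock before systemd marks the unit failed at 03:38:10. Note that the system containerd was stopped earlier ("sudo systemctl stop -f containerd" at 20:37:08.942), so a daemon that ends up dialing the system socket instead of spawning its managed containerd would time out exactly like this. First-pass triage on the guest, per the hints in the failure message itself (a sketch):
	
	# Sketch: triage "failed to dial /run/containerd/containerd.sock".
	systemctl status docker.service            # suggested by the error text
	journalctl -xeu docker.service             # ditto
	systemctl is-active containerd             # was the system containerd left stopped?
	ls -l /run/containerd/containerd.sock      # does the socket dockerd is dialing exist?
	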
	
	
	==> Docker <==
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547509520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547587217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547600394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.547679278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 cri-dockerd[1370]: time="2024-10-04T03:36:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc56c1f3c299c74527bc4bad7199ef2947f06a7fa736aaf71ff605e8aa07e0ac/resolv.conf as [nameserver 192.169.0.1]"
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613336411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613472160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613483466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.613584473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646020305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646150537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646177738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.646306268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688829574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688917158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.688931527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:36:45 ha-214000 dockerd[1122]: time="2024-10-04T03:36:45.689001023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:15 ha-214000 dockerd[1116]: time="2024-10-04T03:37:15.932971006Z" level=info msg="ignoring event" container=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933175492Z" level=info msg="shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933213613Z" level=warning msg="cleaning up after shim disconnected" id=666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842 namespace=moby
	Oct 04 03:37:15 ha-214000 dockerd[1122]: time="2024-10-04T03:37:15.933220107Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691601551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.691989127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692103682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:37:26 ha-214000 dockerd[1122]: time="2024-10-04T03:37:26.692303810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ebd8d90ba3e8f       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   f61fdecdb5ed1       storage-provisioner
	f9fb6aeea4b68       12968670680f4                                                                                         3 minutes ago       Running             kindnet-cni               1                   fc56c1f3c299c       kindnet-flq8x
	e388df4554b33       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   2401eafe0bd31       busybox-7dff88458-m7hqf
	e6ef332ed5737       c69fa2e9cbf5f                                                                                         3 minutes ago       Running             coredns                   1                   c142be8b44551       coredns-7c65d6cfc9-slrtf
	985956e1cb3da       c69fa2e9cbf5f                                                                                         3 minutes ago       Running             coredns                   1                   bf51af5037cab       coredns-7c65d6cfc9-l4wpg
	666390dc434d9       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   f61fdecdb5ed1       storage-provisioner
	e870db0c09c44       60c005f310ff3                                                                                         3 minutes ago       Running             kube-proxy                1                   69d6d030cf38a       kube-proxy-grxks
	2bccf57dd1cf7       18b729c2288dc                                                                                         3 minutes ago       Running             kube-vip                  0                   013ce7946a369       kube-vip-ha-214000
	5c0e6f76f23f0       9aa1fad941575                                                                                         3 minutes ago       Running             kube-scheduler            1                   3b875ceff5048       kube-scheduler-ha-214000
	3a34ed1393f8c       2e96e5913fc06                                                                                         3 minutes ago       Running             etcd                      1                   9863db4133f6a       etcd-ha-214000
	18a77afff888c       6bab7719df100                                                                                         3 minutes ago       Running             kube-apiserver            1                   61526ecfca3d5       kube-apiserver-ha-214000
	bf67ec881904c       175ffd71cce3d                                                                                         3 minutes ago       Running             kube-controller-manager   1                   e9d8b9ee53b05       kube-controller-manager-ha-214000
	083f8e850efee       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   26 minutes ago      Exited              busybox                   0                   241895c2dd1d7       busybox-7dff88458-m7hqf
	9d4a054cd6084       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   20614064fdfe1       coredns-7c65d6cfc9-slrtf
	8dbd76f9a11f2       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   d44e77a58bfbc       coredns-7c65d6cfc9-l4wpg
	4feedccbf99b4       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              27 minutes ago      Exited              kindnet-cni               0                   f71f3588e2173       kindnet-flq8x
	b662c2800c099       60c005f310ff3                                                                                         27 minutes ago      Exited              kube-proxy                0                   4fa1e0fd5f147       kube-proxy-grxks
	95af0d749f454       6bab7719df100                                                                                         28 minutes ago      Exited              kube-apiserver            0                   c2ec72e046987       kube-apiserver-ha-214000
	6cc4cdcf5d7aa       9aa1fad941575                                                                                         28 minutes ago      Exited              kube-scheduler            0                   c35177e1f94b2       kube-scheduler-ha-214000
	7f0249d53b2c9       175ffd71cce3d                                                                                         28 minutes ago      Exited              kube-controller-manager   0                   0b274ed688293       kube-controller-manager-ha-214000
	12454781c5122       2e96e5913fc06                                                                                         28 minutes ago      Exited              etcd                      0                   67bb6b863c2ab       etcd-ha-214000
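	
	The listing above is the CRI view from the surviving ha-214000 control plane: everything restarted ~3 minutes ago is Running (attempt 1, plus kube-vip at attempt 0), while the 26-28 minute-old attempt-0 containers from the original boot sit in Exited. To pull the same view by hand (a sketch; assumes crictl is available in the guest, as it is in minikube images):
	
	# Sketch: reproduce the "container status" table from the node.
	minikube ssh -p ha-214000 -- sudo crictl ps -a
	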
	
	
	==> coredns [8dbd76f9a11f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38629 - 22618 "HINFO IN 5023589628377505410.1968016724755278572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385101452s
	[INFO] 10.244.0.4:34407 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046854268s
	[INFO] 10.244.0.4:40755 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.002216309s
	[INFO] 10.244.0.4:59025 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.009860632s
	[INFO] 10.244.0.4:53507 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099494s
	[INFO] 10.244.0.4:37251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012713s
	[INFO] 10.244.0.4:57152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000697366s
	[INFO] 10.244.0.4:38283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146969s
	[INFO] 10.244.0.4:37846 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061739s
	[INFO] 10.244.0.4:37855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077077s
	[INFO] 10.244.0.4:59723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015034s
	[INFO] 10.244.0.4:41660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00017872s
	[INFO] 10.244.0.4:37167 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060142s
	[INFO] 10.244.0.4:54218 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075106s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [985956e1cb3d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58650 - 4380 "HINFO IN 7121940411115309935.5046063770853036442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044429796s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[262325284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.911) (total time: 30001ms):
	Trace[262325284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[262325284]: [30.001612351s] [30.001612351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1313713214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30002ms):
	Trace[1313713214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.914)
	Trace[1313713214]: [30.00243392s] [30.00243392s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1235317752]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30003ms):
	Trace[1235317752]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.915)
	Trace[1235317752]: [30.003174126s] [30.003174126s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [9d4a054cd608] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37246 - 38388 "HINFO IN 9110788561409410175.7304794748541389035. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.385203454s
	[INFO] 10.244.0.4:53531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157533s
	[INFO] 10.244.0.4:34893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169417s
	[INFO] 10.244.0.4:43265 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001086412s
	[INFO] 10.244.0.4:60377 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110746s
	[INFO] 10.244.0.4:48670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010064s
	[INFO] 10.244.0.4:45784 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073036s
	[INFO] 10.244.0.4:37875 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119228s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6ef332ed573] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60885 - 59353 "HINFO IN 8975973012052876199.2679720306794618198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011598991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1634039844]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30003ms):
	Trace[1634039844]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:37:15.913)
	Trace[1634039844]: [30.003123911s] [30.003123911s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1181919593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.912) (total time: 30001ms):
	Trace[1181919593]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.914)
	Trace[1181919593]: [30.001966872s] [30.001966872s] END
	[INFO] plugin/kubernetes: Trace[1826819322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:36:45.910) (total time: 30001ms):
	Trace[1826819322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:37:15.912)
	Trace[1826819322]: [30.001980832s] [30.001980832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
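	
	Both restarted coredns replicas (985956e1cb3d and e6ef332ed573) show the same pattern: they start with an unsynced API, then every list/watch against the Service VIP 10.96.0.1:443 times out for ~30s between 03:36:45 and 03:37:15 while the control plane is still coming back. A quick after-the-fact check that the VIP has backends again (a sketch; the context name ha-214000 is assumed to match the profile):
	
	# Sketch: verify the kubernetes Service VIP is backed and coredns is healthy.
	kubectl --context ha-214000 get endpoints kubernetes
	kubectl --context ha-214000 -n kube-system get pods -l k8s-app=kube-dns -o wide
	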
	
	
	==> describe nodes <==
	Name:               ha-214000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_12_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:39:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:36:44 +0000   Fri, 04 Oct 2024 03:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-214000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cf440f8eb534a62b20c31c760022e88
	  System UUID:                34c84010-0000-0000-ae73-2f65fb092988
	  Boot ID:                    a841ad05-f0b0-46f0-962d-fb6544f3eb77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m7hqf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7c65d6cfc9-l4wpg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7c65d6cfc9-slrtf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-214000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-flq8x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-214000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-214000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-grxks                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-214000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-214000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 28m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m (x3 over 28m)      kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x3 over 28m)      kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x2 over 28m)      kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                    node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  NodeReady                27m                    kubelet          Node ha-214000 status is now: NodeReady
	  Normal  Starting                 3m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m32s (x8 over 3m32s)  kubelet          Node ha-214000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m32s (x8 over 3m32s)  kubelet          Node ha-214000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m32s (x7 over 3m32s)  kubelet          Node ha-214000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	  Normal  RegisteredNode           24s                    node-controller  Node ha-214000 event: Registered Node ha-214000 in Controller
	
	
	Name:               ha-214000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:25:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:31:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:31:28 +0000   Fri, 04 Oct 2024 03:37:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-214000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da6f3798f0cb41d9bd1591299a3b3ff1
	  System UUID:                2fdf4daa-0000-0000-9098-734a3e025506
	  Boot ID:                    4770f03b-9b5e-4604-b003-96db67ae9ee6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tvdj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kindnet-q87kq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-mhkpc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-214000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-214000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-214000-m03 status is now: NodeReady
	  Normal  RegisteredNode           3m19s              node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
	  Normal  NodeNotReady             2m39s              node-controller  Node ha-214000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           24s                node-controller  Node ha-214000-m03 event: Registered Node ha-214000-m03 in Controller
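	
	ha-214000-m03 stopped renewing its lease at 03:31:28 and was tainted unreachable (NoSchedule/NoExecute) at 03:37:23; it re-registered with each restarted controller-manager but its kubelet never resumed posting status. Watching for recovery (a sketch):
	
	# Sketch: watch whether the unreachable node rejoins.
	kubectl --context ha-214000 get node ha-214000-m03 -w
	kubectl --context ha-214000 describe node ha-214000-m03 | grep -A1 Taints
	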
	
	
	Name:               ha-214000-m04
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-214000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-214000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_03T20_39_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:39:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-214000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:40:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:40:01 +0000   Fri, 04 Oct 2024 03:39:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:40:01 +0000   Fri, 04 Oct 2024 03:39:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:40:01 +0000   Fri, 04 Oct 2024 03:39:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:40:01 +0000   Fri, 04 Oct 2024 03:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-214000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 74e90d24604f4309936c08b342fe3bb8
	  System UUID:                7b814f14-0000-0000-bee8-97f30534dce1
	  Boot ID:                    55c0a0f6-9d6d-4ccc-b298-66977105d627
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z5g4l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-214000-m04                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29s
	  kube-system                 kindnet-lmxhp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      31s
	  kube-system                 kube-apiserver-ha-214000-m04             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-ha-214000-m04    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-t2c5z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-ha-214000-m04             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-vip-ha-214000-m04                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s (x8 over 32s)  kubelet          Node ha-214000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x8 over 32s)  kubelet          Node ha-214000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x7 over 32s)  kubelet          Node ha-214000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node ha-214000-m04 event: Registered Node ha-214000-m04 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-214000-m04 event: Registered Node ha-214000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036162] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007697] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.692433] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.638785] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.210765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 4 03:36] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.104887] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +1.918276] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.255425] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +0.098027] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.126300] systemd-fstab-generator[1107]: Ignoring "noauto" option for root device
	[  +2.450359] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.101909] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.051667] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.054007] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +0.136592] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.448600] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +6.747516] kauditd_printk_skb: 88 callbacks suppressed
	[  +7.915746] kauditd_printk_skb: 40 callbacks suppressed
	[Oct 4 03:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [12454781c512] <==
	{"level":"info","ts":"2024-10-04T03:12:00.492656Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:12:09.373510Z","caller":"traceutil/trace.go:171","msg":"trace[1722252977] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"166.584924ms","start":"2024-10-04T03:12:09.206906Z","end":"2024-10-04T03:12:09.373491Z","steps":["trace[1722252977] 'process raft request'  (duration: 92.146899ms)","trace[1722252977] 'compare'  (duration: 74.234034ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:12:09.373569Z","caller":"traceutil/trace.go:171","msg":"trace[1020568327] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"128.128043ms","start":"2024-10-04T03:12:09.245431Z","end":"2024-10-04T03:12:09.373559Z","steps":["trace[1020568327] 'read index received'  (duration: 53.625787ms)","trace[1020568327] 'applied index is now lower than readState.Index'  (duration: 74.501734ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T03:12:09.374402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.848725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T03:12:09.374494Z","caller":"traceutil/trace.go:171","msg":"trace[1461441424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:339; }","duration":"129.057211ms","start":"2024-10-04T03:12:09.245429Z","end":"2024-10-04T03:12:09.374486Z","steps":["trace[1461441424] 'agreement among raft nodes before linearized reading'  (duration: 128.177252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.374819Z","caller":"traceutil/trace.go:171","msg":"trace[1574543472] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"166.39202ms","start":"2024-10-04T03:12:09.208420Z","end":"2024-10-04T03:12:09.374812Z","steps":["trace[1574543472] 'process raft request'  (duration: 165.001009ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375121Z","caller":"traceutil/trace.go:171","msg":"trace[756140606] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"130.248642ms","start":"2024-10-04T03:12:09.244862Z","end":"2024-10-04T03:12:09.375110Z","steps":["trace[756140606] 'process raft request'  (duration: 128.640419ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:12:09.375566Z","caller":"traceutil/trace.go:171","msg":"trace[682275388] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"128.844293ms","start":"2024-10-04T03:12:09.246715Z","end":"2024-10-04T03:12:09.375559Z","steps":["trace[682275388] 'process raft request'  (duration: 126.812002ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:22:00.620113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2024-10-04T03:22:00.627625Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":973,"took":"6.873284ms","hash":261314489,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2449408,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-04T03:22:00.627665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":261314489,"revision":973,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:27:00.215476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-04T03:27:00.217042Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"1.236946ms","hash":1433174615,"current-db-size-bytes":2449408,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-04T03:27:00.217099Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1433174615,"revision":1514,"compact-revision":973}
	{"level":"info","ts":"2024-10-04T03:31:28.060845Z","caller":"traceutil/trace.go:171","msg":"trace[860112081] transaction","detail":"{read_only:false; response_revision:2637; number_of_response:1; }","duration":"112.489562ms","start":"2024-10-04T03:31:27.948335Z","end":"2024-10-04T03:31:28.060825Z","steps":["trace[860112081] 'process raft request'  (duration: 91.094323ms)","trace[860112081] 'compare'  (duration: 21.269614ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T03:31:44.553900Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:31:44.553958Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	{"level":"warn","ts":"2024-10-04T03:31:44.554007Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554028Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.554108Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:31:44.562422Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:31:44.579712Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-10-04T03:31:44.581173Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581242Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-10-04T03:31:44.581251Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-214000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [3a34ed1393f8] <==
	{"level":"info","ts":"2024-10-04T03:36:39.492642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:36:39.493003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-10-04T03:39:31.478440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860) learners=(6196648108447213699)"}
	{"level":"info","ts":"2024-10-04T03:39:31.478753Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"55feea9f9617a883","added-peer-peer-urls":["https://192.169.0.8:2380"]}
	{"level":"info","ts":"2024-10-04T03:39:31.479001Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.479121Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.479669Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.479836Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883","remote-peer-urls":["https://192.169.0.8:2380"]}
	{"level":"info","ts":"2024-10-04T03:39:31.479972Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480272Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480395Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480489Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:31.480839Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"warn","ts":"2024-10-04T03:39:31.535942Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"55feea9f9617a883","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-04T03:39:31.536046Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"55feea9f9617a883","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"info","ts":"2024-10-04T03:39:32.311652Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.312234Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.312583Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.323139Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"55feea9f9617a883","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-10-04T03:39:32.323311Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:32.333366Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"55feea9f9617a883","stream-type":"stream Message"}
	{"level":"info","ts":"2024-10-04T03:39:32.333407Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"55feea9f9617a883"}
	{"level":"info","ts":"2024-10-04T03:39:33.013374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6196648108447213699 13314548521573537860)"}
	{"level":"info","ts":"2024-10-04T03:39:33.013685Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-10-04T03:39:33.013729Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"55feea9f9617a883"}
	
	
	==> kernel <==
	 03:40:02 up 4 min,  0 users,  load average: 0.14, 0.16, 0.08
	Linux ha-214000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4feedccbf99b] <==
	I1004 03:30:43.497755       1 main.go:299] handling current node
	I1004 03:30:53.496402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:30:53.496595       1 main.go:299] handling current node
	I1004 03:30:53.496647       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:30:53.496795       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:03.496468       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:03.496619       1 main.go:299] handling current node
	I1004 03:31:03.496645       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:03.496656       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:13.497200       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:13.497236       1 main.go:299] handling current node
	I1004 03:31:13.497252       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:13.497259       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:23.497508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:23.497727       1 main.go:299] handling current node
	I1004 03:31:23.497777       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:23.497873       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:33.499104       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:33.499148       1 main.go:299] handling current node
	I1004 03:31:33.499160       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:33.499165       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:31:43.499561       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:31:43.499582       1 main.go:299] handling current node
	I1004 03:31:43.499592       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:31:43.499596       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f9fb6aeea4b6] <==
	I1004 03:39:17.017865       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:17.017946       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:27.017939       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:27.017995       1 main.go:299] handling current node
	I1004 03:39:27.018014       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:27.018027       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:37.016464       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:37.016506       1 main.go:299] handling current node
	I1004 03:39:37.016518       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:37.016523       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:37.016805       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I1004 03:39:37.016835       1 main.go:322] Node ha-214000-m04 has CIDR [10.244.2.0/24] 
	I1004 03:39:37.016917       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0} 
	I1004 03:39:47.017663       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:47.017736       1 main.go:299] handling current node
	I1004 03:39:47.017812       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:47.017826       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:47.018266       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I1004 03:39:47.018288       1 main.go:322] Node ha-214000-m04 has CIDR [10.244.2.0/24] 
	I1004 03:39:57.014884       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I1004 03:39:57.014922       1 main.go:299] handling current node
	I1004 03:39:57.014933       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I1004 03:39:57.014940       1 main.go:322] Node ha-214000-m03 has CIDR [10.244.1.0/24] 
	I1004 03:39:57.015044       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I1004 03:39:57.015071       1 main.go:322] Node ha-214000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [18a77afff888] <==
	I1004 03:36:40.325058       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 03:36:40.325213       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 03:36:40.334937       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1004 03:36:40.335017       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1004 03:36:40.364395       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 03:36:40.364604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:36:40.364759       1 policy_source.go:224] refreshing policies
	I1004 03:36:40.374385       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:36:40.418647       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 03:36:40.423571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 03:36:40.423778       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 03:36:40.423914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:36:40.424647       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 03:36:40.424699       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 03:36:40.425567       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 03:36:40.435139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:36:40.435487       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:36:40.435554       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:36:40.435596       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:36:40.435678       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:36:40.437733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:36:41.323990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1004 03:36:41.538108       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1004 03:36:41.539664       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:36:41.543233       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [95af0d749f45] <==
	W1004 03:31:45.565832       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.564688       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565697       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.563880       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.565578       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571249       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571472       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571633       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.571818       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572042       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572188       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572478       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572615       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572727       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572882       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572220       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572633       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572900       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572324       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572348       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572056       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572740       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572444       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.573046       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 03:31:45.572405       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7f0249d53b2c] <==
	I1004 03:13:35.065074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.799µs"
	I1004 03:13:38.001431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:18:43.448664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:23:48.384517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:25:21.196220       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-214000-m03\" does not exist"
	I1004 03:25:21.209944       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-214000-m03" podCIDRs=["10.244.1.0/24"]
	I1004 03:25:21.210281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.210429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.263775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:21.544452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:23.229208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m03"
	I1004 03:25:23.321462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:31.344709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.663136       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-214000-m03"
	I1004 03:25:50.663715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.672763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:50.678116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.017µs"
	I1004 03:25:50.692886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.244µs"
	I1004 03:25:50.698460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.01µs"
	I1004 03:25:53.295152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:25:57.506654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.151707ms"
	I1004 03:25:57.507147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.862µs"
	I1004 03:26:22.202705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:28:54.315206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000"
	I1004 03:31:28.798824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	
	
	==> kube-controller-manager [bf67ec881904] <==
	I1004 03:39:31.158171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:31.158188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:31.170984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:31.444061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.099140       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.689288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.768707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:33.771502       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-214000-m04"
	I1004 03:39:33.862969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:35.375953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.231µs"
	I1004 03:39:38.447222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:39:38.458713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:38.516627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m03"
	I1004 03:39:41.353245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:43.944945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:48.603231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:51.531963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:51.539153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:51.544111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.605µs"
	I1004 03:39:51.554478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.021µs"
	I1004 03:39:51.560860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.67µs"
	I1004 03:39:53.474437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	I1004 03:39:58.478434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.285042ms"
	I1004 03:39:58.478539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.202µs"
	I1004 03:40:01.574106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-214000-m04"
	
	
	==> kube-proxy [b662c2800c09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:12:10.192204       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:12:10.202009       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:12:10.202096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:12:10.231021       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:12:10.231076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:12:10.231095       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:12:10.233365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:12:10.233817       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:12:10.233898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:12:10.234699       1 config.go:199] "Starting service config controller"
	I1004 03:12:10.234942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:12:10.235010       1 config.go:328] "Starting node config controller"
	I1004 03:12:10.235115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:12:10.235175       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:12:10.237771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:12:10.238085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:12:10.335745       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:12:10.336135       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e870db0c09c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:36:45.742770       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:36:45.775231       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1004 03:36:45.775291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:36:45.922303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:36:45.922329       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:36:45.922347       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:36:45.927222       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:36:45.928115       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:36:45.928127       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:45.937610       1 config.go:199] "Starting service config controller"
	I1004 03:36:45.937639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:36:45.937654       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:36:45.937658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:36:45.937932       1 config.go:328] "Starting node config controller"
	I1004 03:36:45.937937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:36:46.038944       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:36:46.039004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:36:46.051315       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5c0e6f76f23f] <==
	I1004 03:36:38.366946       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:36:40.340041       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:36:40.340076       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:36:40.340085       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:36:40.340089       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:36:40.388605       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:36:40.388643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:36:40.391116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:36:40.391386       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:36:40.391458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:36:40.391415       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:36:40.493018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6cc4cdcf5d7a] <==
	W1004 03:12:01.800920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:12:01.801714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:01.801949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.800977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:01.802284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:01.802455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:01.801044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:12:01.802914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.683842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:12:02.683889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.807418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.807531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.843343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:12:02.843568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.854065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:12:02.854106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:12:02.893868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:12:02.893912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 03:12:03.479664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:31:44.485971       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1004 03:31:44.486818       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1004 03:31:44.487813       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1004 03:31:44.490023       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.651585    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-xtables-lock\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652013    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc2900bb-0bba-4e55-a4dd-1bc9fca40611-xtables-lock\") pod \"kindnet-flq8x\" (UID: \"bc2900bb-0bba-4e55-a4dd-1bc9fca40611\") " pod="kube-system/kindnet-flq8x"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.652345    1532 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/081b3b91-47cc-4e37-a6b8-4de271f93c97-lib-modules\") pod \"kube-proxy-grxks\" (UID: \"081b3b91-47cc-4e37-a6b8-4de271f93c97\") " pod="kube-system/kube-proxy-grxks"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.667905    1532 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 04 03:36:44 ha-214000 kubelet[1532]: I1004 03:36:44.744827    1532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-214000" podStartSLOduration=0.744814896 podStartE2EDuration="744.814896ms" podCreationTimestamp="2024-10-04 03:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 03:36:44.735075165 +0000 UTC m=+14.226297815" watchObservedRunningTime="2024-10-04 03:36:44.744814896 +0000 UTC m=+14.236037540"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296434    1532 scope.go:117] "RemoveContainer" containerID="792bd20fa10c95874d8ad89fc2ecf38b64e23df2d19d9b348cf3e9c46121c1b2"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: I1004 03:37:16.296668    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:16 ha-214000 kubelet[1532]: E1004 03:37:16.296799    1532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f5e9cfaf-fc93-45bd-9061-cf51f9eef735)\"" pod="kube-system/storage-provisioner" podUID="f5e9cfaf-fc93-45bd-9061-cf51f9eef735"
	Oct 04 03:37:26 ha-214000 kubelet[1532]: I1004 03:37:26.640771    1532 scope.go:117] "RemoveContainer" containerID="666390dc434d98eb4e81cd5f088d9f32932d7e0590349ecd22653decb7a54842"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: I1004 03:37:30.670798    1532 scope.go:117] "RemoveContainer" containerID="2e5127305b39f8d6e99e701a21860eb86b129da510647193574f5beeb8153b48"
	Oct 04 03:37:30 ha-214000 kubelet[1532]: E1004 03:37:30.694461    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:37:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:37:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:38:30 ha-214000 kubelet[1532]: E1004 03:38:30.681723    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:38:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:38:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:38:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:38:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:39:30 ha-214000 kubelet[1532]: E1004 03:39:30.681080    1532 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:39:30 ha-214000 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:39:30 ha-214000 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:39:30 ha-214000 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:39:30 ha-214000 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-214000 -n ha-214000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-214000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.47s)
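Note on the repeated kubelet errors above: they come from the kubelet's periodic iptables canary, which once a minute (03:37:30, 03:38:30, 03:39:30) tries to create a KUBE-KUBELET-CANARY chain in the ip6tables "nat" table; this guest kernel has no ip6table_nat support, so every attempt fails identically. A minimal Go sketch of the same probe, assuming only that ip6tables is on PATH inside the guest (illustrative, not the kubelet's own code):

	// probe_ip6tables_nat.go — can the ip6tables "nat" table be used at all?
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Listing the table exits non-zero when the kernel lacks ip6table_nat
		// (the "Table does not exist (do you need to insmod?)" case above).
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table is usable")
	}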

                                                
                                    
TestMountStart/serial/StartWithMountFirst (137.68s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-490000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E1003 20:45:10.877853    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-490000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m17.591775621s)

                                                
                                                
-- stdout --
	* [mount-start-1-490000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-490000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-490000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:7d:84:5:74:38
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-490000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:7b:a6:d8:73:0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:7b:a6:d8:73:0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-490000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-490000 -n mount-start-1-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-490000 -n mount-start-1-490000: exit status 7 (89.657878ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:46:34.453367    5422 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 20:46:34.453388    5422 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-490000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (137.68s)
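The mount-start, scheduled-stop, and pause failures in this section all report the same lookup error ("could not find an IP address for <mac>"): the hyperkit driver waits for the VM's MAC address to appear in macOS's bootpd lease database before it can reach the guest. A minimal sketch of that lookup, assuming the standard /var/db/dhcpd_leases path and the usual "hw_address=1,<mac>" entry format (the real driver's parser is more involved):

	// find_lease_ip.go — scan the macOS lease database for a VM's MAC.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const leases = "/var/db/dhcpd_leases" // macOS bootpd lease database
		mac := "6e:7d:84:5:74:38"             // note: octets are not zero-padded

		f, err := os.Open(leases)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=1,"+mac):
				// ip_address precedes hw_address within an entry,
				// so ip already holds this lease's address.
				fmt.Println("found lease:", ip)
				return
			}
		}
		fmt.Println("no lease for", mac) // the failure mode seen in this run
	}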

                                                
                                    
TestScheduledStopUnix (141.97s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-934000 --memory=2048 --driver=hyperkit 
E1003 21:00:10.915672    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-934000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.633721473s)

                                                
                                                
-- stdout --
	* [scheduled-stop-934000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-934000" primary control-plane node in "scheduled-stop-934000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-934000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:51:aa:1a:18:da
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-934000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:26:a2:14:60:ba
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:26:a2:14:60:ba
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-934000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-934000" primary control-plane node in "scheduled-stop-934000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-934000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:51:aa:1a:18:da
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-934000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:26:a2:14:60:ba
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:26:a2:14:60:ba
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-03 21:02:15.730481 -0700 PDT m=+4476.863448036
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-934000 -n scheduled-stop-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-934000 -n scheduled-stop-934000: exit status 7 (90.068972ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 21:02:15.818423    6548 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:02:15.818445    6548 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-934000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-934000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-934000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-934000: (5.247831664s)
--- FAIL: TestScheduledStopUnix (141.97s)
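Both start attempts above follow the same two-attempt shape: the first StartHost error is retried once (the "Deleting ..." / "Creating ..." pair in stdout, with a fresh MAC each time) before the run gives up with GUEST_PROVISION. A generic sketch of that retry-once shape; startHost and deleteHost are hypothetical stand-ins, not minikube's internals:

	// retry_start.go — the retry-once shape visible in the logs above.
	package main

	import (
		"errors"
		"fmt"
	)

	func startHost() error { return errors.New("IP address never found in dhcp leases file") }

	func deleteHost() { fmt.Println(`* Deleting "scheduled-stop-934000" in hyperkit ...`) }

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost() // tear down the half-created VM before the second attempt
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}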

                                                
                                    
TestPause/serial/Start (139.25s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-106000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-106000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m19.157732186s)

                                                
                                                
-- stdout --
	* [pause-106000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-106000" primary control-plane node in "pause-106000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-106000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:1d:41:7c:24:b2
	* Failed to start hyperkit VM. Running "minikube delete -p pause-106000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:a1:45:5d:b8:61
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:a1:45:5d:b8:61
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-106000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-106000 -n pause-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-106000 -n pause-106000: exit status 7 (91.572612ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 21:44:47.969243    9048 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1003 21:44:47.969269    9048 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-106000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (139.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (7201.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-320000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g79br" [5c2dc497-9e85-468a-a18e-9366a7231463] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (43m12s)
		TestNetworkPlugins/group/flannel (1m9s)
		TestNetworkPlugins/group/flannel/NetCatPod (8s)
		TestNetworkPlugins/group/kindnet (45s)
		TestNetworkPlugins/group/kindnet/Start (45s)
		TestStartStop (3m24s)
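The panic above is the Go test binary's global deadline (it was started with -timeout 2h0m0s): the alarm goroutine (testing.(*M).startAlarm, goroutine 3400 below) aborts the process and dumps every goroutine's stack, which is everything that follows. The NetCatPod wait itself is bounded; goroutine 3392 below shows it inside wait.PollUntilContextTimeout via PodWait with a 0xd18c2e2800 ns (15m0s) budget. A stripped-down sketch of that bounded-poll pattern, generic rather than minikube's helper, with waitForPod as a hypothetical stand-in:

	// bounded_wait.go — bound one wait below the global `go test -timeout`.
	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitForPod stands in for a poll that never succeeds (hypothetical helper).
	func waitForPod(ctx context.Context) error {
		for {
			select {
			case <-time.After(500 * time.Millisecond): // one poll interval
				// pod still not ready; poll again
			case <-ctx.Done():
				return ctx.Err() // context.DeadlineExceeded once the budget is spent
			}
		}
	}

	func main() {
		// The real budget was 15m0s ("waiting 15m0s for pods matching ...");
		// a short deadline keeps this sketch quick to run.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		fmt.Println("wait ended:", waitForPod(ctx))
	}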

                                                
                                                
goroutine 3400 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000ab4340, 0xc00095fbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc0000100a8, {0xf5b4d00, 0x2a, 0x2a}, {0xb3ff4d6?, 0xffffffffffffffff?, 0xf5d7900?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00071bcc0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00071bcc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000983700)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 164 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3193 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001d73ba0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001d73ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001d73ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001d73ba0, 0xc000992440)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3187
	/usr/local/go/src/testing/testing.go:1743 +0x390
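Most of the parked goroutines in this dump share the frame testing.(*testContext).waitParallel, as in goroutine 3193 above: each is a subtest that called t.Parallel() and is queued until the runner, bounded by -test.parallel, grants it a slot, which is why whole TestNetworkPlugins and TestStartStop groups sit in "chan receive, 43 minutes" without doing any work. A tiny self-contained illustration of that parking behavior (names are illustrative):

	// parallel_queue_test.go — subtests park in waitParallel until a slot frees.
	package demo

	import (
		"testing"
		"time"
	)

	func TestQueued(t *testing.T) {
		for i := 0; i < 8; i++ {
			t.Run("sub", func(t *testing.T) {
				t.Parallel() // parks here (waitParallel) while all slots are busy
				time.Sleep(100 * time.Millisecond)
			})
		}
	}

Running it with `go test -run TestQueued -parallel 2 -v` and sending the process SIGQUIT mid-run reproduces the same waitParallel frames in the resulting goroutine dump.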

                                                
                                                
goroutine 3379 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xe209260, 0xc0000681c0}, 0xc00096bf50, 0xc00096bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xe209260, 0xc0000681c0}, 0x80?, 0xc00096bf50, 0xc00096bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xe209260?, 0xc0000681c0?}, 0xc001d72d00?, 0xb53caa0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00096bfd0?, 0xb57b764?, 0xc0008a7200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3368
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1880 [select, 96 minutes]:
net/http.(*persistConn).writeLoop(0xc001384000)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1869
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 3245 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3244
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3188 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001d72820)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001d72820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001d72820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001d72820, 0xc0009922c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3187
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2755 [chan receive, 43 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000281040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000281040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000281040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000281040, 0xc001348880)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 162 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000aac9d0, 0x2b)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00086fd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xe221f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000aaca00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000482000, {0xe1e4fa0, 0xc000a7e030}, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000482000, 0x3b9aca00, 0x0, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 163 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xe209260, 0xc0000681c0}, 0xc000874f50, 0xc000874f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xe209260, 0xc0000681c0}, 0x6c?, 0xc000874f50, 0xc000874f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xe209260?, 0xc0000681c0?}, 0x61627261742d656d?, 0x2f3831762f736c6c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x302e30322e31762d?, 0x2d72656b636f642d?, 0x3279616c7265766f?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 155 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xe1ffb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 156 [chan receive, 114 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000aaca00, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1441 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0004273d0, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014f3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xe221f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000427400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000706b10, {0xe1e4fa0, 0xc001d475c0}, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000706b10, 0x3b9aca00, 0x0, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1420
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2700 [chan receive, 45 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000427140, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2679
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2756 [chan receive, 43 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0002811e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0002811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0002811e0, 0xc001348900)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2734 [chan receive, 43 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000280340)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000280340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000280340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000280340, 0xc001348600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3189 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001d72ea0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001d72ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001d72ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001d72ea0, 0xc000992300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3187
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3187 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc001d72680, 0xe1d75b8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2704
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1520 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014ba900, 0xc000069c00)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1519
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3191 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001d73860)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001d73860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001d73860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001d73860, 0xc000992380)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3187
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2699 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xe1ffb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2679
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3192 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001d73a00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001d73a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001d73a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001d73a00, 0xc0009923c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3187
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3342 [syscall, 2 minutes]:
syscall.syscall6(0xff75a68?, 0x90?, 0xc0014f5d18?, 0xff6c108?, 0x90?, 0x100000b404fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc0014f5cd8?, 0xb400ac5?, 0x90?, 0xe154660?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc00062a380?, 0xc0014f5d0c, 0xc001bec0c0?, 0xc0012f5240?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc0004c1ec0)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0xb44b1b9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0015fd380)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0015fd380)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc000ab4ea0, 0xc0015fd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000ab4ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000ab4ea0, 0xc00147c750)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2754
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2646 [chan receive, 43 minutes]:
testing.(*T).Run(0xc0014f6820, {0xcf0161a?, 0x4b5d76aeae3?}, 0xc00146c738)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014f6820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0014f6820, 0xe1d73f8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1208 [IO wait, 101 minutes]:
internal/poll.runtime_pollWait(0x56f13d80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000412a00?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000412a00)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000412a00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001d5a180)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001d5a180)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc00091e4b0, {0xe1fc980, 0xc001d5a180})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc00091e4b0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc001d724e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1205
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 1762 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc001cb7c80, 0xc001cc59d0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1761
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 1790 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc0007e4d80, 0xc001cc5260)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1789
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2736 [chan receive, 43 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000280820)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000280820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000280820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000280820, 0xc001348700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2709 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000427110, 0x1a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001396d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xe221f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000427140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008214f0, {0xe1e4fa0, 0xc001416540}, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008214f0, 0x3b9aca00, 0x0, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2700
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2710 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xe209260, 0xc0000681c0}, 0xc00048df50, 0xc001376f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xe209260, 0xc0000681c0}, 0xd0?, 0xc00048df50, 0xc00048df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xe209260?, 0xc0000681c0?}, 0xc0014f6000?, 0xb53caa0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xb57b705?, 0xc0007e4480?, 0xc00145abd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2700
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2704 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0014f64e0, {0xcf0161a?, 0xb53c193?}, 0xe1d75b8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0014f64e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0014f64e0, 0xe1d7440)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2711 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2710
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1420 [chan receive, 97 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000427400, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1341
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2735 [chan receive, 43 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0002804e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002804e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0002804e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0002804e0, 0xc001348680)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1419 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xe1ffb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1341
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2757 [chan receive, 43 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000281380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000281380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000281380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000281380, 0xc001348980)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1442 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xe209260, 0xc0000681c0}, 0xc001de5750, 0xc001306f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xe209260, 0xc0000681c0}, 0x0?, 0xc001de5750, 0xc001de5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xe209260?, 0xc0000681c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xba5f285?, 0xc001350ea0?, 0xe1ffb40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1420
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1443 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1442
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2753 [chan receive]:
testing.(*T).Run(0xc000280b60, {0xcf09674?, 0xdb6ca40?}, 0xc00147c960)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000280b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc000280b60, 0xc001348780)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3344 [IO wait]:
internal/poll.runtime_pollWait(0x56f13c78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001350360?, 0xc0015193c5?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001350360, {0xc0015193c5, 0x1ec3b, 0x1ec3b})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001d68e40, {0xc0015193c5?, 0xc00096b550?, 0x1fe94?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00147c840, {0xe1e3858, 0xc000a8aa48})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xe1e39e0, 0xc00147c840}, {0xe1e3858, 0xc000a8aa48}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00096b678?, {0xe1e39e0, 0xc00147c840})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf58a750?, {0xe1e39e0?, 0xc00147c840?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0xe1e39e0, 0xc00147c840}, {0xe1e3940, 0xc001d68e40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0012e3730?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3342
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 3190 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0005a2640)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001d73040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001d73040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001d73040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001d73040, 0xc000992340)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3187
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3392 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0xe209058, 0xc0007009a0}, {0xe1fcfe0, 0xc0008a8e60}, 0x1, 0x0, 0xc001373be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0xe209058?, 0xc000510cb0?}, 0x3b9aca00, 0xc001319dd8?, 0x1, 0xc001319be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0xe209058, 0xc000510cb0}, 0xc000ab4820, {0xc0013c2a30, 0xe}, {0xcf04fab, 0x7}, {0xcf0b1b8, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc000ab4820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc000ab4820, 0xc00147c960)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2753
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2754 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000280ea0, {0xcf0161f?, 0xdb6ca40?}, 0xc00147c750)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000280ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc000280ea0, 0xc001348800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2732
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1800 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e10600, 0xc001d54930)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1312
	/usr/local/go/src/os/exec/exec.go:759 +0x953
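
A watchCtx goroutine blocked on a channel send for 96 minutes usually means a context-bound command whose result was never collected with Wait. For reference, a sketch of how such a goroutine comes to exist (not the suite's code):

package main

import (
	"context"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Start spawns the watchCtx goroutine; Wait (inside Run) receives its result.
	// Starting a command and never calling Wait leaves watchCtx stuck on its send.
	cmd := exec.CommandContext(ctx, "sleep", "10")
	_ = cmd.Run()
}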

                                                
                                                
goroutine 3257 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xe1ffb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3253
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238
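
The waitingLoop is client-go plumbing: the transport's certificate rotator feeds a typed workqueue, and workers drain it with Get, which is the sync.Cond.Wait visible in goroutines 3378 and 3243 below. A minimal consumer sketch, assuming client-go v0.31's generic workqueue API:

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.NewTyped[string]() // matches the Typed[...] frames above
	q.Add("rotate-cert")

	item, shutdown := q.Get() // blocks until an item arrives or ShutDown is called
	if shutdown {
		return
	}
	fmt.Println("processing", item)
	q.Done(item) // mark the item finished so it may be queued again
	q.ShutDown()
}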

                                                
                                                
goroutine 2732 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000280000, 0xc00146c738)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2646
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1879 [select, 96 minutes]:
net/http.(*persistConn).readLoop(0xc001384000)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1869
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 3244 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xe209260, 0xc0000681c0}, 0xc001de0f50, 0xc001de0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xe209260, 0xc0000681c0}, 0x80?, 0xc001de0f50, 0xc001de0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xe209260?, 0xc0000681c0?}, 0xc000281040?, 0xb53caa0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001de0fd0?, 0xb57b764?, 0xc001b27180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3258
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3378 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000aaded0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001de4580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xe221f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000aadf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012f52c0, {0xe1e4fa0, 0xc00147c8a0}, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012f52c0, 0x3b9aca00, 0x0, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3368
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf
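
The BackoffUntil/JitterUntil/Until chain is the standard worker loop: re-run runWorker once per second (0x3b9aca00 ns again) until the stop channel closes. A sketch of the same primitive:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stop := make(chan struct{})
	go wait.Until(func() {
		fmt.Println("worker tick") // stands in for processNextWorkItem
	}, time.Second, stop)

	time.Sleep(3 * time.Second)
	close(stop) // ends the loop, like closing the stop channel in the dump
}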

                                                
                                                
goroutine 3243 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008a7290, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000486d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xe221f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008a72c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005b6be0, {0xe1e4fa0, 0xc0013746f0}, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005b6be0, 0x3b9aca00, 0x0, 0x1, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3258
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3258 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008a72c0, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3253
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3377 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015fd380, 0xc0012e3c00)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3342
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3343 [IO wait]:
internal/poll.runtime_pollWait(0x56f13438, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001350240?, 0xc0017feb96?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001350240, {0xc0017feb96, 0x46a, 0x46a})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001d68e28, {0xc0017feb96?, 0x19?, 0x226?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00147c810, {0xe1e3858, 0xc000a8aa40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xe1e39e0, 0xc00147c810}, {0xe1e3858, 0xc000a8aa40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0xe1e39e0, 0xc00147c810})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf58a750?, {0xe1e39e0?, 0xc00147c810?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0xe1e39e0, 0xc00147c810}, {0xe1e3940, 0xc001d68e28}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00147c750?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3342
	/usr/local/go/src/os/exec/exec.go:732 +0x98b
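
These IO-wait goroutines are how os/exec captures output: when Cmd.Stdout is not an *os.File, (*Cmd).Start creates a pipe and a copier goroutine (writerDescriptor.func1) that io.Copy-s the child's output into the buffer, parking in runtime_pollWait while the child is quiet. A minimal sketch:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out bytes.Buffer
	cmd := exec.Command("echo", "hello")
	cmd.Stdout = &out // a non-*os.File writer forces the pipe + copier goroutine

	if err := cmd.Run(); err != nil { // Run = Start + Wait; Wait joins the copier
		fmt.Println("run failed:", err)
		return
	}
	fmt.Print(out.String())
}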

                                                
                                                
goroutine 3367 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xe1ffb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3366
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3384 [IO wait]:
internal/poll.runtime_pollWait(0x56f13330, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000412200?, 0xc00195f000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000412200, {0xc00195f000, 0x3000, 0x3000})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000412200, {0xc00195f000?, 0x10?, 0xc0013988a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001d68e80, {0xc00195f000?, 0xc00195f005?, 0x1a?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001bed0e0, {0xc00195f000?, 0x0?, 0xc001bed0e0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0004c77b8, {0xe1e5580, 0xc001bed0e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0004c7508, {0x56f5e480, 0xc001bec2e8}, 0xc001398a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0004c7508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0004c7508, {0xc001a58000, 0x1000, 0xc001ae2700?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0008fe7e0, {0xc0014dc3c0, 0x9, 0xf586330?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0xe1e3a80, 0xc0008fe7e0}, {0xc0014dc3c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0014dc3c0, 0x9, 0xb468c65?}, {0xe1e3a80?, 0xc0008fe7e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0014dc380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001398fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0015fd500)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3383
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3380 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3379
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3368 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000aadf00, 0xc0000681c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3366
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                    

Test pass (174/220)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.09
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.25
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.31.1/json-events 18.92
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.3
18 TestDownloadOnly/v1.31.1/DeleteAll 0.25
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.23
21 TestBinaryMirror 0.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 261.9
29 TestAddons/serial/Volcano 40.13
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Registry 18.92
35 TestAddons/parallel/Ingress 19.49
36 TestAddons/parallel/InspektorGadget 10.49
37 TestAddons/parallel/Logviewer 6.39
38 TestAddons/parallel/MetricsServer 6.5
40 TestAddons/parallel/CSI 59.13
41 TestAddons/parallel/Headlamp 17.46
42 TestAddons/parallel/CloudSpanner 6.38
43 TestAddons/parallel/LocalPath 52.55
44 TestAddons/parallel/NvidiaDevicePlugin 5.37
45 TestAddons/parallel/Yakd 10.47
46 TestAddons/StoppedEnableDisable 5.96
54 TestHyperKitDriverInstallOrUpdate 8.32
57 TestErrorSpam/setup 36.61
58 TestErrorSpam/start 1.65
59 TestErrorSpam/status 0.52
60 TestErrorSpam/pause 1.35
61 TestErrorSpam/unpause 1.39
62 TestErrorSpam/stop 153.83
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 48.1
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 38.45
69 TestFunctional/serial/KubeContext 0.05
70 TestFunctional/serial/KubectlGetPods 0.07
73 TestFunctional/serial/CacheCmd/cache/add_remote 9.74
74 TestFunctional/serial/CacheCmd/cache/add_local 1.4
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
76 TestFunctional/serial/CacheCmd/cache/list 0.09
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
78 TestFunctional/serial/CacheCmd/cache/cache_reload 2.57
79 TestFunctional/serial/CacheCmd/cache/delete 0.17
80 TestFunctional/serial/MinikubeKubectlCmd 1.2
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.55
82 TestFunctional/serial/ExtraConfig 64.52
83 TestFunctional/serial/ComponentHealth 0.06
84 TestFunctional/serial/LogsCmd 2.66
85 TestFunctional/serial/LogsFileCmd 2.76
86 TestFunctional/serial/InvalidService 4.1
88 TestFunctional/parallel/ConfigCmd 0.52
89 TestFunctional/parallel/DashboardCmd 13.95
90 TestFunctional/parallel/DryRun 1.03
91 TestFunctional/parallel/InternationalLanguage 0.51
92 TestFunctional/parallel/StatusCmd 0.53
96 TestFunctional/parallel/ServiceCmdConnect 15.39
97 TestFunctional/parallel/AddonsCmd 0.24
98 TestFunctional/parallel/PersistentVolumeClaim 29.45
100 TestFunctional/parallel/SSHCmd 0.32
101 TestFunctional/parallel/CpCmd 1.07
102 TestFunctional/parallel/MySQL 24.82
103 TestFunctional/parallel/FileSync 0.2
104 TestFunctional/parallel/CertSync 1.16
108 TestFunctional/parallel/NodeLabels 0.07
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
112 TestFunctional/parallel/License 1.43
113 TestFunctional/parallel/Version/short 0.11
114 TestFunctional/parallel/Version/components 0.42
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.09
120 TestFunctional/parallel/ImageCommands/Setup 1.72
121 TestFunctional/parallel/DockerEnv/bash 0.62
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.87
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.73
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.34
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.16
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.05
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
143 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
144 TestFunctional/parallel/ServiceCmd/List 0.78
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.78
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
147 TestFunctional/parallel/ServiceCmd/Format 0.45
148 TestFunctional/parallel/ServiceCmd/URL 0.45
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
150 TestFunctional/parallel/ProfileCmd/profile_list 0.32
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
152 TestFunctional/parallel/MountCmd/any-port 10.88
153 TestFunctional/parallel/MountCmd/specific-port 1.53
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/NodeLabels 0.06
175 TestMultiControlPlane/serial/StopCluster 158.64
182 TestImageBuild/serial/Setup 37.81
183 TestImageBuild/serial/NormalBuild 4.63
184 TestImageBuild/serial/BuildWithBuildArg 1.09
185 TestImageBuild/serial/BuildWithDockerIgnore 0.9
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.9
190 TestJSONOutput/start/Command 81.08
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.48
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.46
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 8.34
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.61
218 TestMainNoArgs 0.09
219 TestMinikubeProfile 90.83
225 TestMultiNode/serial/FreshStart2Nodes 118.63
226 TestMultiNode/serial/DeployApp2Nodes 8.93
227 TestMultiNode/serial/PingHostFrom2Pods 0.93
228 TestMultiNode/serial/AddNode 55.59
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.38
231 TestMultiNode/serial/CopyFile 5.68
232 TestMultiNode/serial/StopNode 2.9
233 TestMultiNode/serial/StartAfterStop 41.6
234 TestMultiNode/serial/RestartKeepsNodes 179.7
235 TestMultiNode/serial/DeleteNode 3.34
236 TestMultiNode/serial/StopMultiNode 16.8
237 TestMultiNode/serial/RestartMultiNode 122.58
238 TestMultiNode/serial/ValidateNameConflict 41.09
242 TestPreload 190.53
245 TestSkaffold 126.19
248 TestRunningBinaryUpgrade 120.74
250 TestKubernetesUpgrade 1398.7
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.1
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.93
265 TestStoppedBinaryUpgrade/Setup 5.04
266 TestStoppedBinaryUpgrade/Upgrade 151.22
269 TestStoppedBinaryUpgrade/MinikubeLogs 2.56
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.48
279 TestNoKubernetes/serial/StartWithK8s 70.32
281 TestNoKubernetes/serial/StartWithStopK8s 18.64
282 TestNoKubernetes/serial/Start 19.71
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.14
284 TestNoKubernetes/serial/ProfileList 0.63
285 TestNoKubernetes/serial/Stop 2.43
286 TestNoKubernetes/serial/StartNoArgs 20.1
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.15
TestDownloadOnly/v1.20.0/json-events (39.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (39.085207204s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (39.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1003 19:48:17.897508    2003 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1003 19:48:17.897705    2003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-524000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-524000: exit status 85 (298.956209ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-524000 | jenkins | v1.34.0 | 03 Oct 24 19:47 PDT |          |
	|         | -p download-only-524000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 19:47:38
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:47:38.869607    2004 out.go:345] Setting OutFile to fd 1 ...
	I1003 19:47:38.870377    2004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:47:38.870386    2004 out.go:358] Setting ErrFile to fd 2...
	I1003 19:47:38.870392    2004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:47:38.870979    2004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	W1003 19:47:38.871098    2004 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19546-1440/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19546-1440/.minikube/config/config.json: no such file or directory
	I1003 19:47:38.873105    2004 out.go:352] Setting JSON to true
	I1003 19:47:38.901806    2004 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1028,"bootTime":1728009030,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 19:47:38.901971    2004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 19:47:38.925019    2004 out.go:97] [download-only-524000] minikube v1.34.0 on Darwin 15.0.1
	W1003 19:47:38.925201    2004 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 19:47:38.925202    2004 notify.go:220] Checking for updates...
	I1003 19:47:38.946066    2004 out.go:169] MINIKUBE_LOCATION=19546
	I1003 19:47:38.967112    2004 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 19:47:38.988968    2004 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 19:47:39.010112    2004 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:47:39.031007    2004 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	W1003 19:47:39.075070    2004 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 19:47:39.075592    2004 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 19:47:39.133082    2004 out.go:97] Using the hyperkit driver based on user configuration
	I1003 19:47:39.133134    2004 start.go:297] selected driver: hyperkit
	I1003 19:47:39.133148    2004 start.go:901] validating driver "hyperkit" against <nil>
	I1003 19:47:39.133340    2004 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:47:39.133800    2004 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 19:47:39.539608    2004 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 19:47:39.546937    2004 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 19:47:39.546963    2004 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 19:47:39.547004    2004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 19:47:39.553881    2004 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1003 19:47:39.554070    2004 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:47:39.554102    2004 cni.go:84] Creating CNI manager for ""
	I1003 19:47:39.554148    2004 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 19:47:39.554223    2004 start.go:340] cluster config:
	{Name:download-only-524000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-524000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:47:39.554484    2004 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:47:39.575414    2004 out.go:97] Downloading VM boot image ...
	I1003 19:47:39.575525    2004 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1003 19:47:56.655844    2004 out.go:97] Starting "download-only-524000" primary control-plane node in "download-only-524000" cluster
	I1003 19:47:56.655886    2004 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:47:56.948832    2004 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1003 19:47:56.948868    2004 cache.go:56] Caching tarball of preloaded images
	I1003 19:47:56.949281    2004 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:47:56.970842    2004 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1003 19:47:56.970865    2004 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 19:47:57.511778    2004 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-524000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-524000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-524000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (18.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-722000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-722000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit : (18.92145375s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.92s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1003 19:48:37.598683    2003 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1003 19:48:37.598722    2003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-722000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-722000: exit status 85 (297.720601ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-524000 | jenkins | v1.34.0 | 03 Oct 24 19:47 PDT |                     |
	|         | -p download-only-524000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 03 Oct 24 19:48 PDT | 03 Oct 24 19:48 PDT |
	| delete  | -p download-only-524000        | download-only-524000 | jenkins | v1.34.0 | 03 Oct 24 19:48 PDT | 03 Oct 24 19:48 PDT |
	| start   | -o=json --download-only        | download-only-722000 | jenkins | v1.34.0 | 03 Oct 24 19:48 PDT |                     |
	|         | -p download-only-722000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 19:48:18
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:48:18.735665    2032 out.go:345] Setting OutFile to fd 1 ...
	I1003 19:48:18.735880    2032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:48:18.735885    2032 out.go:358] Setting ErrFile to fd 2...
	I1003 19:48:18.735888    2032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:48:18.736070    2032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 19:48:18.737556    2032 out.go:352] Setting JSON to true
	I1003 19:48:18.765405    2032 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1068,"bootTime":1728009030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 19:48:18.765560    2032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 19:48:18.786897    2032 out.go:97] [download-only-722000] minikube v1.34.0 on Darwin 15.0.1
	I1003 19:48:18.787117    2032 notify.go:220] Checking for updates...
	I1003 19:48:18.808655    2032 out.go:169] MINIKUBE_LOCATION=19546
	I1003 19:48:18.829809    2032 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 19:48:18.852780    2032 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 19:48:18.873912    2032 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:48:18.895678    2032 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	W1003 19:48:18.937540    2032 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 19:48:18.938056    2032 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 19:48:18.970622    2032 out.go:97] Using the hyperkit driver based on user configuration
	I1003 19:48:18.970675    2032 start.go:297] selected driver: hyperkit
	I1003 19:48:18.970689    2032 start.go:901] validating driver "hyperkit" against <nil>
	I1003 19:48:18.970880    2032 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:48:18.971182    2032 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19546-1440/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1003 19:48:18.983294    2032 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1003 19:48:18.989506    2032 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 19:48:18.989539    2032 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1003 19:48:18.989568    2032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 19:48:18.994620    2032 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1003 19:48:18.994764    2032 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:48:18.994794    2032 cni.go:84] Creating CNI manager for ""
	I1003 19:48:18.994841    2032 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 19:48:18.994851    2032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 19:48:18.994918    2032 start.go:340] cluster config:
	{Name:download-only-722000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:48:18.995007    2032 iso.go:125] acquiring lock: {Name:mkff99aa7c8fccf1cce53982ea6ff54b0512813e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:48:19.016587    2032 out.go:97] Starting "download-only-722000" primary control-plane node in "download-only-722000" cluster
	I1003 19:48:19.016621    2032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 19:48:19.439408    2032 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1003 19:48:19.439446    2032 cache.go:56] Caching tarball of preloaded images
	I1003 19:48:19.439909    2032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 19:48:19.461615    2032 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1003 19:48:19.461648    2032 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I1003 19:48:20.015957    2032 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /Users/jenkins/minikube-integration/19546-1440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-722000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-722000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-722000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.99s)

                                                
                                                
=== RUN   TestBinaryMirror
I1003 19:48:38.834675    2003 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-975000 --alsologtostderr --binary-mirror http://127.0.0.1:49449 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-975000
--- PASS: TestBinaryMirror (0.99s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-675000
addons_test.go:945: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-675000: exit status 85 (195.934828ms)

                                                
                                                
-- stdout --
	* Profile "addons-675000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-675000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:956: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-675000
addons_test.go:956: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-675000: exit status 85 (215.412872ms)

                                                
                                                
-- stdout --
	* Profile "addons-675000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-675000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
TestAddons/Setup (261.9s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-675000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-amd64 start -p addons-675000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m21.904151087s)
--- PASS: TestAddons/Setup (261.90s)

                                                
                                    
TestAddons/serial/Volcano (40.13s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:830: volcano-controller stabilized in 12.249739ms
addons_test.go:814: volcano-scheduler stabilized in 12.278796ms
addons_test.go:822: volcano-admission stabilized in 12.411831ms
addons_test.go:836: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-9626n" [87f9ab88-d97c-4643-99a7-56044b824ed7] Running
addons_test.go:836: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004413801s
addons_test.go:840: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-28724" [a6925def-b346-4785-8c22-1916219f0cbb] Running
addons_test.go:840: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005999113s
addons_test.go:844: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-5scl4" [6bc6ecb0-2b72-4166-8b43-bc0dd8abdea6] Running
addons_test.go:844: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004992947s
addons_test.go:849: (dbg) Run:  kubectl --context addons-675000 delete -n volcano-system job volcano-admission-init
addons_test.go:855: (dbg) Run:  kubectl --context addons-675000 create -f testdata/vcjob.yaml
addons_test.go:863: (dbg) Run:  kubectl --context addons-675000 get vcjob -n my-volcano
addons_test.go:881: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d54dbf77-1a9f-4c24-a980-03588ac5a686] Pending
helpers_test.go:344: "test-job-nginx-0" [d54dbf77-1a9f-4c24-a980-03588ac5a686] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d54dbf77-1a9f-4c24-a980-03588ac5a686] Running
addons_test.go:881: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003123642s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable volcano --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable volcano --alsologtostderr -v=1: (10.806681872s)
--- PASS: TestAddons/serial/Volcano (40.13s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:570: (dbg) Run:  kubectl --context addons-675000 create ns new-namespace
addons_test.go:584: (dbg) Run:  kubectl --context addons-675000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/parallel/Registry (18.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:322: registry stabilized in 1.439611ms
addons_test.go:324: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-t7gnr" [42513073-7ca5-42a5-b201-17b80b3e762c] Running
I1003 20:01:54.084246    2003 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1003 20:01:54.084257    2003 kapi.go:107] duration metric: took 4.551984ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:324: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00271418s
addons_test.go:327: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zknpx" [6c1c888a-b7d9-47f6-af29-7f8d8b988913] Running
addons_test.go:327: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003949478s
addons_test.go:332: (dbg) Run:  kubectl --context addons-675000 delete po -l run=registry-test --now
addons_test.go:337: (dbg) Run:  kubectl --context addons-675000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:337: (dbg) Done: kubectl --context addons-675000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.216274809s)
addons_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 ip
2024/10/03 20:02:12 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.92s)
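
For context, the registry check above works by launching a one-off busybox pod inside the cluster and probing the registry Service over its cluster DNS name. The following is a minimal Go sketch of that probe, not the actual addons_test.go code; the profile name, image, and URL are taken from the log, and "-i" stands in for the log's "-it" since a CI run has no TTY.

package main

import (
	"fmt"
	"os/exec"
)

// probeRegistry runs a throwaway busybox pod that issues `wget --spider`
// against the registry Service DNS name; a non-zero exit means the
// registry is not reachable from inside the cluster. (Hypothetical
// helper mirroring the kubectl command in the log above.)
func probeRegistry(kubectlContext string) error {
	cmd := exec.Command("kubectl", "--context", kubectlContext,
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("registry unreachable from inside the cluster: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := probeRegistry("addons-675000"); err != nil {
		fmt.Println(err)
	}
}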

TestAddons/parallel/Ingress (19.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:208: (dbg) Run:  kubectl --context addons-675000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:233: (dbg) Run:  kubectl --context addons-675000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:246: (dbg) Run:  kubectl --context addons-675000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [062b0ef1-88d8-4a59-8b67-f10f9d209442] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [062b0ef1-88d8-4a59-8b67-f10f9d209442] Running
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005179316s
I1003 20:03:38.166447    2003 kapi.go:150] Service nginx in namespace default found.
addons_test.go:263: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:287: (dbg) Run:  kubectl --context addons-675000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:292: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 ip
addons_test.go:298: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable ingress-dns --alsologtostderr -v=1: (1.078549074s)
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable ingress --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable ingress --alsologtostderr -v=1: (7.461123283s)
--- PASS: TestAddons/parallel/Ingress (19.49s)

TestAddons/parallel/InspektorGadget (10.49s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-k4jvw" [eb31c076-8227-49aa-aad1-454da838e318] Running
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00400564s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.489547203s)
--- PASS: TestAddons/parallel/InspektorGadget (10.49s)

TestAddons/parallel/Logviewer (6.39s)

=== RUN   TestAddons/parallel/Logviewer
=== PAUSE TestAddons/parallel/Logviewer

=== CONT  TestAddons/parallel/Logviewer
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: waiting 8m0s for pods matching "app=logviewer" in namespace "kube-system" ...
helpers_test.go:344: "logviewer-7c79c8bcc9-jsdxs" [3d49055c-6daf-44c3-af79-78f74043161f] Running
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: app=logviewer healthy within 6.005196738s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable logviewer --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Logviewer (6.39s)

TestAddons/parallel/MetricsServer (6.5s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:395: metrics-server stabilized in 1.749018ms
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-48ksz" [fe622677-5138-4edd-9eeb-b148996f3934] Running
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003665848s
addons_test.go:403: (dbg) Run:  kubectl --context addons-675000 top pods -n kube-system
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.50s)

TestAddons/parallel/CSI (59.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1003 20:01:54.079714    2003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:489: csi-hostpath-driver pods stabilized in 4.558215ms
addons_test.go:492: (dbg) Run:  kubectl --context addons-675000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:497: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:502: (dbg) Run:  kubectl --context addons-675000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:507: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1a15b5ea-fe01-4589-8451-a3192dab2a60] Pending
helpers_test.go:344: "task-pv-pod" [1a15b5ea-fe01-4589-8451-a3192dab2a60] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1a15b5ea-fe01-4589-8451-a3192dab2a60] Running
addons_test.go:507: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004135876s
addons_test.go:512: (dbg) Run:  kubectl --context addons-675000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:517: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-675000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-675000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:522: (dbg) Run:  kubectl --context addons-675000 delete pod task-pv-pod
addons_test.go:528: (dbg) Run:  kubectl --context addons-675000 delete pvc hpvc
addons_test.go:534: (dbg) Run:  kubectl --context addons-675000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-675000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:549: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [92557da2-0d49-429e-ac2f-d0df81b1eca1] Pending
helpers_test.go:344: "task-pv-pod-restore" [92557da2-0d49-429e-ac2f-d0df81b1eca1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [92557da2-0d49-429e-ac2f-d0df81b1eca1] Running
addons_test.go:549: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005887199s
addons_test.go:554: (dbg) Run:  kubectl --context addons-675000 delete pod task-pv-pod-restore
addons_test.go:554: (dbg) Done: kubectl --context addons-675000 delete pod task-pv-pod-restore: (1.014031572s)
addons_test.go:558: (dbg) Run:  kubectl --context addons-675000 delete pvc hpvc-restore
addons_test.go:562: (dbg) Run:  kubectl --context addons-675000 delete volumesnapshot new-snapshot-demo
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.396749228s)
--- PASS: TestAddons/parallel/CSI (59.13s)
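
The long run of `get pvc ... -o jsonpath={.status.phase}` calls above is a poll loop: the helper re-reads the claim's phase until it reports Bound or the wait budget is spent. A minimal sketch of that pattern follows; it is a hypothetical helper, not the actual helpers_test.go implementation, with the poll interval assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound re-runs `kubectl get pvc -o jsonpath={.status.phase}`
// until the claim reports Bound or the timeout elapses, mirroring the
// repeated helpers_test.go:394 invocations in the log.
func waitForPVCBound(kubectlContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; the real helper's cadence is not shown
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-675000", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}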

TestAddons/parallel/Headlamp (17.46s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:744: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-675000 --alsologtostderr -v=1
addons_test.go:744: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-675000 --alsologtostderr -v=1: (1.011044227s)
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-tx4jk" [1a2909fe-38b8-4de6-b77f-32449b53a80e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-tx4jk" [1a2909fe-38b8-4de6-b77f-32449b53a80e] Running
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.007086323s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable headlamp --alsologtostderr -v=1: (5.444117289s)
--- PASS: TestAddons/parallel/Headlamp (17.46s)

TestAddons/parallel/CloudSpanner (6.38s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-mmhz6" [067f9921-1738-46bf-b26a-e6b31e588ab2] Running
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00547126s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.38s)

TestAddons/parallel/LocalPath (52.55s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:894: (dbg) Run:  kubectl --context addons-675000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:900: (dbg) Run:  kubectl --context addons-675000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:904: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6dbd938e-4f6d-43ad-9c70-6cd071de20f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6dbd938e-4f6d-43ad-9c70-6cd071de20f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6dbd938e-4f6d-43ad-9c70-6cd071de20f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.007480998s
addons_test.go:912: (dbg) Run:  kubectl --context addons-675000 get pvc test-pvc -o=json
addons_test.go:921: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 ssh "cat /opt/local-path-provisioner/pvc-d78414f3-a034-4f13-a883-e2713774e684_default_test-pvc/file1"
addons_test.go:933: (dbg) Run:  kubectl --context addons-675000 delete pod test-local-path
addons_test.go:937: (dbg) Run:  kubectl --context addons-675000 delete pvc test-pvc
addons_test.go:990: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.813690607s)
--- PASS: TestAddons/parallel/LocalPath (52.55s)

TestAddons/parallel/NvidiaDevicePlugin (5.37s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-97wl5" [50d759b1-dc22-4e0f-8268-74e62816f7df] Running
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004282212s
addons_test.go:972: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-675000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.37s)

TestAddons/parallel/Yakd (10.47s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ws557" [b276624b-bd10-4eaf-811e-09bc5c2e1331] Running
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005838506s
addons_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 -p addons-675000 addons disable yakd --alsologtostderr -v=1
addons_test.go:984: (dbg) Done: out/minikube-darwin-amd64 -p addons-675000 addons disable yakd --alsologtostderr -v=1: (5.462506318s)
--- PASS: TestAddons/parallel/Yakd (10.47s)

TestAddons/StoppedEnableDisable (5.96s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-675000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-675000: (5.385279044s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-675000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-675000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-675000
--- PASS: TestAddons/StoppedEnableDisable (5.96s)

TestHyperKitDriverInstallOrUpdate (8.32s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1003 21:04:43.365558    2003 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1003 21:04:43.365757    2003 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
W1003 21:04:44.159216    2003 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1003 21:04:44.159450    2003 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1003 21:04:44.159509    2003 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit
I1003 21:04:44.639062    2003 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740] Decompressors:map[bz2:0xc0004bbf70 gz:0xc0004bbf78 tar:0xc0004bbf20 tar.bz2:0xc0004bbf30 tar.gz:0xc0004bbf40 tar.xz:0xc0004bbf50 tar.zst:0xc0004bbf60 tbz2:0xc0004bbf30 tgz:0xc0004bbf40 txz:0xc0004bbf50 tzst:0xc0004bbf60 xz:0xc0004bbfc0 zip:0xc0004bbff0 zst:0xc0004bbfc8] Getters:map[file:0xc001d0de90 http:0xc0005a12c0 https:0xc0005a13b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1003 21:04:44.639098    2003 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit
I1003 21:04:47.483705    2003 install.go:79] stdout: 
W1003 21:04:47.483856    2003 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit 
I1003 21:04:47.483887    2003 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit]
I1003 21:04:47.505012    2003 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit]
I1003 21:04:47.525342    2003 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit]
I1003 21:04:47.544956    2003 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/001/docker-machine-driver-hyperkit]
I1003 21:04:47.583894    2003 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1003 21:04:47.584024    2003 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I1003 21:04:48.328013    2003 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1003 21:04:48.328047    2003 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1003 21:04:48.328098    2003 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1003 21:04:48.328141    2003 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit
I1003 21:04:48.712617    2003 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740 0xf5fb740] Decompressors:map[bz2:0xc0004bbf70 gz:0xc0004bbf78 tar:0xc0004bbf20 tar.bz2:0xc0004bbf30 tar.gz:0xc0004bbf40 tar.xz:0xc0004bbf50 tar.zst:0xc0004bbf60 tbz2:0xc0004bbf30 tgz:0xc0004bbf40 txz:0xc0004bbf50 tzst:0xc0004bbf60 xz:0xc0004bbfc0 zip:0xc0004bbff0 zst:0xc0004bbfc8] Getters:map[file:0xc0006becc0 http:0xc000051270 https:0xc000051360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1003 21:04:48.712667    2003 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit
I1003 21:04:51.583380    2003 install.go:79] stdout: 
W1003 21:04:51.583517    2003 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit 
I1003 21:04:51.583554    2003 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit]
I1003 21:04:51.604420    2003 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit]
I1003 21:04:51.625514    2003 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit]
I1003 21:04:51.644889    2003 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1488224217/002/docker-machine-driver-hyperkit]
--- PASS: TestHyperKitDriverInstallOrUpdate (8.32s)
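
The chown/chmod pair in the log is the point of this test: the hyperkit driver binary must be owned by root:wheel and carry the setuid bit before minikube will accept it. Below is a rough Go sketch of that permission fix-up, not the real install.go implementation; the example path is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// fixDriverPermissions mirrors the two sudo commands in the log: the
// driver must be root-owned with the setuid bit so it can manage VMs
// without the calling user being root. (Hypothetical helper.)
func fixDriverPermissions(driverPath string) error {
	for _, args := range [][]string{
		{"chown", "root:wheel", driverPath},
		{"chmod", "u+s", driverPath},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("sudo %v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Example path is an assumption, not taken from the log.
	if err := fixDriverPermissions("/usr/local/bin/docker-machine-driver-hyperkit"); err != nil {
		fmt.Println(err)
	}
}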

TestErrorSpam/setup (36.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-615000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-615000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 --driver=hyperkit : (36.612484569s)
--- PASS: TestErrorSpam/setup (36.61s)

TestErrorSpam/start (1.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 start --dry-run
--- PASS: TestErrorSpam/start (1.65s)

TestErrorSpam/status (0.52s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 status
--- PASS: TestErrorSpam/status (0.52s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (153.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 stop: (3.36544003s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 stop: (1m15.233851191s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-615000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-615000 stop: (1m15.230256848s)
--- PASS: TestErrorSpam/stop (153.83s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19546-1440/.minikube/files/etc/test/nested/copy/2003/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-042000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-042000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (48.096114969s)
--- PASS: TestFunctional/serial/StartWithProxy (48.10s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.45s)

=== RUN   TestFunctional/serial/SoftStart
I1003 20:07:57.535493    2003 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-042000 --alsologtostderr -v=8
E1003 20:08:01.960631    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:01.968460    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:01.979761    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:02.002527    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:02.045679    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:02.128009    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:02.290177    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:02.612541    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:03.254758    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:04.536889    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:07.098442    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:12.219989    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:08:22.462262    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-042000 --alsologtostderr -v=8: (38.45216216s)
functional_test.go:663: soft start took 38.45278983s for "functional-042000" cluster.
I1003 20:08:35.988247    2003 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.45s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-042000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 cache add registry.k8s.io/pause:3.1: (3.71102506s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cache add registry.k8s.io/pause:3.3
E1003 20:08:42.945200    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 cache add registry.k8s.io/pause:3.3: (3.544715125s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 cache add registry.k8s.io/pause:latest: (2.486846331s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.74s)

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local1927995584/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cache add minikube-local-cache-test:functional-042000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cache delete minikube-local-cache-test:functional-042000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-042000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (157.595179ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 cache reload: (2.041653512s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.57s)
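
The reload sequence above is: remove the image from the node's runtime, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the inspect succeeds again. A compact sketch of that round trip follows; it is a hypothetical helper, not the functional_test.go code, with the binary, image, and profile names taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

const (
	minikube = "out/minikube-darwin-amd64"
	image    = "registry.k8s.io/pause:latest"
)

func run(args ...string) error { return exec.Command(minikube, args...).Run() }

// verifyCacheReload deletes an image on the node, expects the inspect to
// fail, restores the image from the local cache, and expects the inspect
// to pass again, mirroring the four commands in the log above.
func verifyCacheReload(profile string) error {
	_ = run("-p", profile, "ssh", "sudo", "docker", "rmi", image)
	if run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image) == nil {
		return fmt.Errorf("image still present after rmi")
	}
	if err := run("-p", profile, "cache", "reload"); err != nil {
		return fmt.Errorf("cache reload: %v", err)
	}
	return run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image)
}

func main() {
	if err := verifyCacheReload("functional-042000"); err != nil {
		fmt.Println(err)
	}
}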

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (1.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 kubectl -- --context functional-042000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 kubectl -- --context functional-042000 get pods: (1.203047057s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.20s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-042000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-042000 get pods: (1.551527564s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.55s)

TestFunctional/serial/ExtraConfig (64.52s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-042000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1003 20:09:23.909308    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-042000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m4.514667417s)
functional_test.go:761: restart took 1m4.514803636s for "functional-042000" cluster.
I1003 20:09:57.621873    2003 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (64.52s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-042000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 logs: (2.659356819s)
--- PASS: TestFunctional/serial/LogsCmd (2.66s)

TestFunctional/serial/LogsFileCmd (2.76s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd245269062/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd245269062/001/logs.txt: (2.754800156s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.76s)

TestFunctional/serial/InvalidService (4.1s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-042000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-042000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-042000: exit status 115 (279.611615ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:32395 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-042000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 config get cpus: exit status 14 (61.276866ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 config get cpus: exit status 14 (61.93147ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
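
Note: exit status 14 is what minikube's config get returns here when the key has been unset (the stderr line above explains why). For reference, a minimal Go sketch of recovering that exit code from a subprocess using only the standard library; the binary path and profile name are copied from the log, and the snippet is illustrative rather than part of the test suite:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test drives; an unset key makes it exit non-zero.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-042000", "config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // 14 in the run logged above
	}
}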

TestFunctional/parallel/DashboardCmd (13.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-042000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-042000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3644: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.95s)

TestFunctional/parallel/DryRun (1.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-042000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-042000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (489.221247ms)
-- stdout --
	* [functional-042000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1003 20:11:06.058464    3622 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:06.058771    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:06.058777    3622 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:06.058781    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:06.058962    3622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:06.060447    3622 out.go:352] Setting JSON to false
	I1003 20:11:06.087931    3622 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2436,"bootTime":1728009030,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:06.088017    3622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:06.109702    3622 out.go:177] * [functional-042000] minikube v1.34.0 on Darwin 15.0.1
	I1003 20:11:06.151330    3622 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:06.151449    3622 notify.go:220] Checking for updates...
	I1003 20:11:06.193048    3622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:06.214322    3622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:06.235263    3622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:06.256327    3622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:06.277402    3622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:06.298732    3622 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:06.299208    3622 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:06.299257    3622 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:06.310469    3622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50646
	I1003 20:11:06.310869    3622 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:06.311316    3622 main.go:141] libmachine: Using API Version  1
	I1003 20:11:06.311330    3622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:06.311594    3622 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:06.311724    3622 main.go:141] libmachine: (functional-042000) Calling .DriverName
	I1003 20:11:06.311922    3622 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:06.312185    3622 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:06.312210    3622 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:06.322831    3622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50648
	I1003 20:11:06.323163    3622 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:06.323503    3622 main.go:141] libmachine: Using API Version  1
	I1003 20:11:06.323520    3622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:06.323736    3622 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:06.323910    3622 main.go:141] libmachine: (functional-042000) Calling .DriverName
	I1003 20:11:06.355342    3622 out.go:177] * Using the hyperkit driver based on existing profile
	I1003 20:11:06.376206    3622 start.go:297] selected driver: hyperkit
	I1003 20:11:06.376230    3622 start.go:901] validating driver "hyperkit" against &{Name:functional-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:06.376393    3622 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:06.403499    3622 out.go:201] 
	W1003 20:11:06.424207    3622 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 20:11:06.445193    3622 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-042000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.03s)
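
The dry run fails fast because the requested memory is validated before any VM work begins. A hypothetical sketch of that kind of pre-flight check; the constant and function names are illustrative, not minikube's actual identifiers:

package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB floor reported in the log above.
const minUsableMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250)) // the --memory 250MB request from the dry run
}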

TestFunctional/parallel/InternationalLanguage (0.51s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-042000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-042000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (510.349726ms)
-- stdout --
	* [functional-042000] minikube v1.34.0 sur Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1003 20:11:05.537844    3615 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:11:05.538137    3615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:05.538143    3615 out.go:358] Setting ErrFile to fd 2...
	I1003 20:11:05.538147    3615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:11:05.538348    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:11:05.540027    3615 out.go:352] Setting JSON to false
	I1003 20:11:05.568278    3615 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2435,"bootTime":1728009030,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 20:11:05.568409    3615 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:11:05.592079    3615 out.go:177] * [functional-042000] minikube v1.34.0 sur Darwin 15.0.1
	I1003 20:11:05.634883    3615 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:11:05.634891    3615 notify.go:220] Checking for updates...
	I1003 20:11:05.679903    3615 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	I1003 20:11:05.700814    3615 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 20:11:05.722035    3615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:11:05.742749    3615 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	I1003 20:11:05.763780    3615 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:11:05.785590    3615 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:11:05.786283    3615 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:05.786338    3615 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:05.798457    3615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50641
	I1003 20:11:05.798924    3615 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:05.799329    3615 main.go:141] libmachine: Using API Version  1
	I1003 20:11:05.799358    3615 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:05.799606    3615 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:05.799754    3615 main.go:141] libmachine: (functional-042000) Calling .DriverName
	I1003 20:11:05.799957    3615 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:11:05.800221    3615 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:11:05.800250    3615 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:11:05.810933    3615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50643
	I1003 20:11:05.811269    3615 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:11:05.811640    3615 main.go:141] libmachine: Using API Version  1
	I1003 20:11:05.811654    3615 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:11:05.811864    3615 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:11:05.811989    3615 main.go:141] libmachine: (functional-042000) Calling .DriverName
	I1003 20:11:05.846490    3615 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1003 20:11:05.885965    3615 start.go:297] selected driver: hyperkit
	I1003 20:11:05.885996    3615 start.go:901] validating driver "hyperkit" against &{Name:functional-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:11:05.886195    3615 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:11:05.913818    3615 out.go:201] 
	W1003 20:11:05.934751    3615 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 20:11:05.955794    3615 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.51s)

TestFunctional/parallel/StatusCmd (0.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)
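
The -f flag above is a Go template rendered against a status struct ("kublet" is reproduced verbatim from the test's format string, so it is left as-is in the log). A self-contained sketch of the same mechanism with text/template; the Status type here is illustrative, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the fields the format string references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}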

TestFunctional/parallel/ServiceCmdConnect (15.39s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-042000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-042000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d6x45" [023e2969-1355-49e9-bc76-4ecdd635cc93] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d6x45" [023e2969-1355-49e9-bc76-4ecdd635cc93] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.004283264s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:32460
functional_test.go:1675: http://192.169.0.4:32460: success! body:
Hostname: hello-node-connect-67bdd5bbb4-d6x45
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/
Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:32460
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.39s)
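
The "waiting 10m0s for pods matching ..." lines come from a poll-until-ready helper. A generic, standalone sketch of that pattern; waitFor and the stubbed predicate are illustrative, not helpers from helpers_test.go:

package main

import (
	"fmt"
	"time"
)

// waitFor polls ready at the given interval until it returns true or the timeout elapses.
func waitFor(timeout, interval time.Duration, ready func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("condition not met within %s", timeout)
}

func main() {
	start := time.Now()
	// Real code would list pods with app=hello-node-connect and check phase/readiness.
	err := waitFor(10*time.Minute, time.Second, func() bool { return time.Since(start) > 3*time.Second })
	fmt.Println(err) // <nil> once the stub condition holds
}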

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (29.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0bc9e18f-b8c4-48c0-b827-98239f7ef9c1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004384378s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-042000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-042000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-042000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [224d173f-5c50-4599-9880-d1f2d4623237] Pending
helpers_test.go:344: "sp-pod" [224d173f-5c50-4599-9880-d1f2d4623237] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1003 20:10:45.833322    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [224d173f-5c50-4599-9880-d1f2d4623237] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.002306893s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-042000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-042000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-042000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ab657a2d-5434-4a36-ac70-2b7274c395b3] Pending
helpers_test.go:344: "sp-pod" [ab657a2d-5434-4a36-ac70-2b7274c395b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ab657a2d-5434-4a36-ac70-2b7274c395b3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003113906s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-042000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.45s)

TestFunctional/parallel/SSHCmd (0.32s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

TestFunctional/parallel/CpCmd (1.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh -n functional-042000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cp functional-042000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3216863286/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh -n functional-042000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh -n functional-042000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.07s)

TestFunctional/parallel/MySQL (24.82s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-042000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9zz6f" [8f83cbbf-b105-449b-87d7-bbd15916cd5c] Pending
helpers_test.go:344: "mysql-6cdb49bbb-9zz6f" [8f83cbbf-b105-449b-87d7-bbd15916cd5c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9zz6f" [8f83cbbf-b105-449b-87d7-bbd15916cd5c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004791391s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-042000 exec mysql-6cdb49bbb-9zz6f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-042000 exec mysql-6cdb49bbb-9zz6f -- mysql -ppassword -e "show databases;": exit status 1 (169.424557ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1003 20:10:32.019007    2003 retry.go:31] will retry after 1.368932722s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-042000 exec mysql-6cdb49bbb-9zz6f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-042000 exec mysql-6cdb49bbb-9zz6f -- mysql -ppassword -e "show databases;": exit status 1 (144.730294ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1003 20:10:33.533412    2003 retry.go:31] will retry after 1.854767352s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-042000 exec mysql-6cdb49bbb-9zz6f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.82s)
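
The retry.go lines above show the harness polling mysql until the container finishes initializing (first ERROR 1045 while the init scripts run, then ERROR 2002 while the socket is down), with a growing, jittered delay between attempts, which is why the logged intervals are not round numbers. A sketch of that retry shape; the helper here is illustrative, not the retry package the log references:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry reruns op with a linearly growing, jittered delay between attempts.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("ERROR 2002 (HY000): server not ready")
		}
		return nil
	})
}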

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2003/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /etc/test/nested/copy/2003/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2003.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /etc/ssl/certs/2003.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2003.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /usr/share/ca-certificates/2003.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/20032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /etc/ssl/certs/20032.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/20032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /usr/share/ca-certificates/20032.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.16s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-042000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
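
The kubectl invocation above drives a Go template that ranges over the node's labels map. The same template semantics, runnable standalone with text/template (the label values here are made up for the example):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/os":     "linux",
		"minikube.k8s.io/name": "functional-042000",
	}
	// Same shape as the --template flag: emit each key followed by a space.
	tmpl := template.Must(template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}\n"))
	_ = tmpl.Execute(os.Stdout, labels)
}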

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh "sudo systemctl is-active crio": exit status 1 (139.556902ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

TestFunctional/parallel/License (1.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-amd64 license: (1.427467205s)
--- PASS: TestFunctional/parallel/License (1.43s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-042000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-042000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-042000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-042000 image ls --format short --alsologtostderr:
I1003 20:11:18.556177    3744 out.go:345] Setting OutFile to fd 1 ...
I1003 20:11:18.556460    3744 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:18.556465    3744 out.go:358] Setting ErrFile to fd 2...
I1003 20:11:18.556469    3744 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:18.556637    3744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
I1003 20:11:18.557295    3744 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:18.557388    3744 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:18.557758    3744 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:18.557797    3744 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:18.568926    3744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50813
I1003 20:11:18.569375    3744 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:18.569800    3744 main.go:141] libmachine: Using API Version  1
I1003 20:11:18.569809    3744 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:18.570031    3744 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:18.570142    3744 main.go:141] libmachine: (functional-042000) Calling .GetState
I1003 20:11:18.570235    3744 main.go:141] libmachine: (functional-042000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:11:18.570311    3744 main.go:141] libmachine: (functional-042000) DBG | hyperkit pid from json: 3007
I1003 20:11:18.571720    3744 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:18.571745    3744 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:18.582721    3744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50815
I1003 20:11:18.583071    3744 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:18.583401    3744 main.go:141] libmachine: Using API Version  1
I1003 20:11:18.583418    3744 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:18.583650    3744 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:18.583759    3744 main.go:141] libmachine: (functional-042000) Calling .DriverName
I1003 20:11:18.583924    3744 ssh_runner.go:195] Run: systemctl --version
I1003 20:11:18.583941    3744 main.go:141] libmachine: (functional-042000) Calling .GetSSHHostname
I1003 20:11:18.584046    3744 main.go:141] libmachine: (functional-042000) Calling .GetSSHPort
I1003 20:11:18.584143    3744 main.go:141] libmachine: (functional-042000) Calling .GetSSHKeyPath
I1003 20:11:18.584230    3744 main.go:141] libmachine: (functional-042000) Calling .GetSSHUsername
I1003 20:11:18.584318    3744 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/functional-042000/id_rsa Username:docker}
I1003 20:11:18.614861    3744 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1003 20:11:18.638481    3744 main.go:141] libmachine: Making call to close driver server
I1003 20:11:18.638519    3744 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:18.638679    3744 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:18.638689    3744 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:18.638698    3744 main.go:141] libmachine: Making call to close connection to plugin binary
I1003 20:11:18.638707    3744 main.go:141] libmachine: Making call to close driver server
I1003 20:11:18.638713    3744 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:18.638841    3744 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:18.638848    3744 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:18.638868    3744 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-042000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | 7f553e8bbc897 | 192MB  |
| docker.io/kicbase/echo-server               | functional-042000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | cb8f91112b6b5 | 47MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-042000 | 618c541e0dc85 | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-042000 image ls --format table --alsologtostderr:
I1003 20:11:21.201091    3768 out.go:345] Setting OutFile to fd 1 ...
I1003 20:11:21.201317    3768 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:21.201323    3768 out.go:358] Setting ErrFile to fd 2...
I1003 20:11:21.201327    3768 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:21.201528    3768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
I1003 20:11:21.202207    3768 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:21.202306    3768 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:21.202655    3768 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:21.202697    3768 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:21.213397    3768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50842
I1003 20:11:21.213820    3768 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:21.214225    3768 main.go:141] libmachine: Using API Version  1
I1003 20:11:21.214235    3768 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:21.214439    3768 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:21.214543    3768 main.go:141] libmachine: (functional-042000) Calling .GetState
I1003 20:11:21.214623    3768 main.go:141] libmachine: (functional-042000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:11:21.214682    3768 main.go:141] libmachine: (functional-042000) DBG | hyperkit pid from json: 3007
I1003 20:11:21.216041    3768 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:21.216061    3768 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:21.226822    3768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50844
I1003 20:11:21.227157    3768 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:21.227500    3768 main.go:141] libmachine: Using API Version  1
I1003 20:11:21.227515    3768 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:21.227724    3768 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:21.227835    3768 main.go:141] libmachine: (functional-042000) Calling .DriverName
I1003 20:11:21.228001    3768 ssh_runner.go:195] Run: systemctl --version
I1003 20:11:21.228020    3768 main.go:141] libmachine: (functional-042000) Calling .GetSSHHostname
I1003 20:11:21.228115    3768 main.go:141] libmachine: (functional-042000) Calling .GetSSHPort
I1003 20:11:21.228205    3768 main.go:141] libmachine: (functional-042000) Calling .GetSSHKeyPath
I1003 20:11:21.228335    3768 main.go:141] libmachine: (functional-042000) Calling .GetSSHUsername
I1003 20:11:21.228445    3768 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/functional-042000/id_rsa Username:docker}
I1003 20:11:21.258894    3768 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1003 20:11:21.275910    3768 main.go:141] libmachine: Making call to close driver server
I1003 20:11:21.275918    3768 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:21.276079    3768 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:21.276088    3768 main.go:141] libmachine: Making call to close connection to plugin binary
I1003 20:11:21.276092    3768 main.go:141] libmachine: Making call to close driver server
I1003 20:11:21.276097    3768 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:21.276101    3768 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:21.276223    3768 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:21.276282    3768 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:21.276317    3768 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-042000 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-042000"],"size":"4940000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},
{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},
{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},
{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},
{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},
{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},
{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},
{"id":"618c541e0dc8564cf7499cb96a48294685eadafbb4b41841fbe96386a23c491d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-042000"],"size":"30"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-042000 image ls --format json --alsologtostderr:
I1003 20:11:21.041508    3764 out.go:345] Setting OutFile to fd 1 ...
I1003 20:11:21.041855    3764 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:21.041860    3764 out.go:358] Setting ErrFile to fd 2...
I1003 20:11:21.041864    3764 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:21.042034    3764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
I1003 20:11:21.042647    3764 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:21.042751    3764 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:21.043106    3764 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:21.043147    3764 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:21.053717    3764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50837
I1003 20:11:21.054092    3764 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:21.054516    3764 main.go:141] libmachine: Using API Version  1
I1003 20:11:21.054527    3764 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:21.054744    3764 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:21.054911    3764 main.go:141] libmachine: (functional-042000) Calling .GetState
I1003 20:11:21.055031    3764 main.go:141] libmachine: (functional-042000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:11:21.055085    3764 main.go:141] libmachine: (functional-042000) DBG | hyperkit pid from json: 3007
I1003 20:11:21.056495    3764 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:21.056521    3764 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:21.067500    3764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50839
I1003 20:11:21.067858    3764 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:21.068172    3764 main.go:141] libmachine: Using API Version  1
I1003 20:11:21.068183    3764 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:21.068398    3764 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:21.068514    3764 main.go:141] libmachine: (functional-042000) Calling .DriverName
I1003 20:11:21.068695    3764 ssh_runner.go:195] Run: systemctl --version
I1003 20:11:21.068712    3764 main.go:141] libmachine: (functional-042000) Calling .GetSSHHostname
I1003 20:11:21.068818    3764 main.go:141] libmachine: (functional-042000) Calling .GetSSHPort
I1003 20:11:21.068911    3764 main.go:141] libmachine: (functional-042000) Calling .GetSSHKeyPath
I1003 20:11:21.069010    3764 main.go:141] libmachine: (functional-042000) Calling .GetSSHUsername
I1003 20:11:21.069114    3764 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/functional-042000/id_rsa Username:docker}
I1003 20:11:21.098767    3764 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1003 20:11:21.115319    3764 main.go:141] libmachine: Making call to close driver server
I1003 20:11:21.115328    3764 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:21.115475    3764 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:21.115484    3764 main.go:141] libmachine: Making call to close connection to plugin binary
I1003 20:11:21.115489    3764 main.go:141] libmachine: Making call to close driver server
I1003 20:11:21.115494    3764 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:21.115610    3764 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:21.115617    3764 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:21.115620    3764 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
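Note for readers scraping this report: the "image ls --format json" stdout above is a single JSON array of image records. A minimal Go sketch that decodes it is below; it is not part of the test suite, and the binary path and profile name are simply the ones from this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the stdout above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	// Same invocation the test makes, minus --alsologtostderr.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-042000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

The YAML variant in the next test decodes the same records with a YAML library; only the --format flag changes.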

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-042000 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 618c541e0dc8564cf7499cb96a48294685eadafbb4b41841fbe96386a23c491d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-042000
size: "30"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-042000
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-042000 image ls --format yaml --alsologtostderr:
I1003 20:11:18.723842    3749 out.go:345] Setting OutFile to fd 1 ...
I1003 20:11:18.724074    3749 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:18.724080    3749 out.go:358] Setting ErrFile to fd 2...
I1003 20:11:18.724084    3749 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:18.724249    3749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
I1003 20:11:18.724893    3749 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:18.724999    3749 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:18.725370    3749 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:18.725411    3749 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:18.736211    3749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50818
I1003 20:11:18.736634    3749 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:18.737031    3749 main.go:141] libmachine: Using API Version  1
I1003 20:11:18.737040    3749 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:18.737310    3749 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:18.737428    3749 main.go:141] libmachine: (functional-042000) Calling .GetState
I1003 20:11:18.737511    3749 main.go:141] libmachine: (functional-042000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:11:18.737581    3749 main.go:141] libmachine: (functional-042000) DBG | hyperkit pid from json: 3007
I1003 20:11:18.739017    3749 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:18.739043    3749 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:18.749978    3749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50820
I1003 20:11:18.750313    3749 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:18.750665    3749 main.go:141] libmachine: Using API Version  1
I1003 20:11:18.750679    3749 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:18.750879    3749 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:18.751001    3749 main.go:141] libmachine: (functional-042000) Calling .DriverName
I1003 20:11:18.751192    3749 ssh_runner.go:195] Run: systemctl --version
I1003 20:11:18.751211    3749 main.go:141] libmachine: (functional-042000) Calling .GetSSHHostname
I1003 20:11:18.751295    3749 main.go:141] libmachine: (functional-042000) Calling .GetSSHPort
I1003 20:11:18.751377    3749 main.go:141] libmachine: (functional-042000) Calling .GetSSHKeyPath
I1003 20:11:18.751466    3749 main.go:141] libmachine: (functional-042000) Calling .GetSSHUsername
I1003 20:11:18.751559    3749 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/functional-042000/id_rsa Username:docker}
I1003 20:11:18.784052    3749 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1003 20:11:18.831916    3749 main.go:141] libmachine: Making call to close driver server
I1003 20:11:18.831925    3749 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:18.832080    3749 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:18.832093    3749 main.go:141] libmachine: Making call to close connection to plugin binary
I1003 20:11:18.832106    3749 main.go:141] libmachine: Making call to close driver server
I1003 20:11:18.832111    3749 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:18.832170    3749 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:18.832266    3749 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:18.832274    3749 main.go:141] libmachine: Making call to close connection to plugin binary
I1003 20:11:18.832273    3749 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh pgrep buildkitd: exit status 1 (146.699793ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image build -t localhost/my-image:functional-042000 testdata/build --alsologtostderr
2024/10/03 20:11:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-042000 image build -t localhost/my-image:functional-042000 testdata/build --alsologtostderr: (4.780809152s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-042000 image build -t localhost/my-image:functional-042000 testdata/build --alsologtostderr:
I1003 20:11:19.065157    3758 out.go:345] Setting OutFile to fd 1 ...
I1003 20:11:19.065474    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:19.065480    3758 out.go:358] Setting ErrFile to fd 2...
I1003 20:11:19.065483    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:11:19.065677    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
I1003 20:11:19.066315    3758 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:19.066964    3758 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:11:19.067307    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:19.067342    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:19.078072    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50830
I1003 20:11:19.078522    3758 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:19.078960    3758 main.go:141] libmachine: Using API Version  1
I1003 20:11:19.078973    3758 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:19.079227    3758 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:19.079335    3758 main.go:141] libmachine: (functional-042000) Calling .GetState
I1003 20:11:19.079416    3758 main.go:141] libmachine: (functional-042000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1003 20:11:19.079492    3758 main.go:141] libmachine: (functional-042000) DBG | hyperkit pid from json: 3007
I1003 20:11:19.080854    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1003 20:11:19.080880    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1003 20:11:19.091971    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50832
I1003 20:11:19.092314    3758 main.go:141] libmachine: () Calling .GetVersion
I1003 20:11:19.092663    3758 main.go:141] libmachine: Using API Version  1
I1003 20:11:19.092678    3758 main.go:141] libmachine: () Calling .SetConfigRaw
I1003 20:11:19.092893    3758 main.go:141] libmachine: () Calling .GetMachineName
I1003 20:11:19.093020    3758 main.go:141] libmachine: (functional-042000) Calling .DriverName
I1003 20:11:19.093183    3758 ssh_runner.go:195] Run: systemctl --version
I1003 20:11:19.093200    3758 main.go:141] libmachine: (functional-042000) Calling .GetSSHHostname
I1003 20:11:19.093277    3758 main.go:141] libmachine: (functional-042000) Calling .GetSSHPort
I1003 20:11:19.093360    3758 main.go:141] libmachine: (functional-042000) Calling .GetSSHKeyPath
I1003 20:11:19.093442    3758 main.go:141] libmachine: (functional-042000) Calling .GetSSHUsername
I1003 20:11:19.093545    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/functional-042000/id_rsa Username:docker}
I1003 20:11:19.131944    3758 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3736960682.tar
I1003 20:11:19.132040    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 20:11:19.141809    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3736960682.tar
I1003 20:11:19.145283    3758 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3736960682.tar: stat -c "%s %y" /var/lib/minikube/build/build.3736960682.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3736960682.tar': No such file or directory
I1003 20:11:19.145309    3758 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3736960682.tar --> /var/lib/minikube/build/build.3736960682.tar (3072 bytes)
I1003 20:11:19.165870    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3736960682
I1003 20:11:19.174548    3758 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3736960682 -xf /var/lib/minikube/build/build.3736960682.tar
I1003 20:11:19.182971    3758 docker.go:360] Building image: /var/lib/minikube/build/build.3736960682
I1003 20:11:19.183044    3758 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-042000 /var/lib/minikube/build/build.3736960682
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.6s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:cb6f1063930d8703b141c0a710892f5f473c5321d6aff30129f11618e6651bf1 done
#8 naming to localhost/my-image:functional-042000 done
#8 DONE 0.0s
I1003 20:11:23.743896    3758 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-042000 /var/lib/minikube/build/build.3736960682: (4.560798873s)
I1003 20:11:23.743971    3758 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3736960682
I1003 20:11:23.752387    3758 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3736960682.tar
I1003 20:11:23.760439    3758 build_images.go:217] Built localhost/my-image:functional-042000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3736960682.tar
I1003 20:11:23.760460    3758 build_images.go:133] succeeded building to: functional-042000
I1003 20:11:23.760463    3758 build_images.go:134] failed building to: 
I1003 20:11:23.760481    3758 main.go:141] libmachine: Making call to close driver server
I1003 20:11:23.760488    3758 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:23.760639    3758 main.go:141] libmachine: (functional-042000) DBG | Closing plugin on server side
I1003 20:11:23.760642    3758 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:23.760650    3758 main.go:141] libmachine: Making call to close connection to plugin binary
I1003 20:11:23.760657    3758 main.go:141] libmachine: Making call to close driver server
I1003 20:11:23.760663    3758 main.go:141] libmachine: (functional-042000) Calling .Close
I1003 20:11:23.760827    3758 main.go:141] libmachine: Successfully made call to close driver server
I1003 20:11:23.760837    3758 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.09s)
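Note: the transcript above shows the intended flow. The "ssh pgrep buildkitd" probe exits non-zero because no standalone BuildKit daemon runs in the VM, so the build goes through the Docker runtime path, and the step-numbered output is "docker build" inside the guest executing a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A hedged Go sketch that replays the same two CLI calls outside the harness, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe for a BuildKit daemon in the VM, as the test does; a non-zero
	// exit here is expected and just selects the docker build path.
	probe := exec.Command("out/minikube-darwin-amd64", "-p", "functional-042000",
		"ssh", "pgrep buildkitd")
	if err := probe.Run(); err != nil {
		fmt.Println("buildkitd not running; docker build path will be used")
	}

	// Build the image from the same context directory the test uses.
	build := exec.Command("out/minikube-darwin-amd64", "-p", "functional-042000",
		"image", "build", "-t", "localhost/my-image:functional-042000",
		"testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}
	fmt.Println("built localhost/my-image:functional-042000")
}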

TestFunctional/parallel/ImageCommands/Setup (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.689119074s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-042000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/DockerEnv/bash (0.62s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-042000 docker-env) && out/minikube-darwin-amd64 status -p functional-042000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-042000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image load --daemon kicbase/echo-server:functional-042000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image load --daemon kicbase/echo-server:functional-042000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.73s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-042000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image load --daemon kicbase/echo-server:functional-042000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image save kicbase/echo-server:functional-042000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image rm kicbase/echo-server:functional-042000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.34s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-042000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 image save --daemon kicbase/echo-server:functional-042000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-042000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-042000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-042000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-042000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-042000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3435: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-042000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-042000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4b4c58df-ca6c-4d8b-b5b4-ffeaba4127a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4b4c58df-ca6c-4d8b-b5b4-ffeaba4127a4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.00524045s
I1003 20:10:36.410077    2003 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.16s)
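Note: the wait above (up to 4m0s for pods matching "run=nginx-svc" in namespace "default") can be reproduced with client-go. A rough sketch under the same parameters; the kubeconfig path is the one reported earlier in this run, and error handling is minimal:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/Users/jenkins/minikube-integration/19546-1440/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until a pod with the run=nginx-svc label reports Running,
	// the same condition the test helper waits for above.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=nginx-svc"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod running:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for run=nginx-svc")
}

(Checking Phase == Running is a simplification; the helper above also tracks readiness, which is why the pod is first listed as Pending / Ready:ContainersNotReady.)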

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-042000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.228.211 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1003 20:10:36.509558    2003 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)
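Note: the dig query above goes straight to the cluster DNS ClusterIP (10.96.0.10), which is only reachable from the host while "minikube tunnel" is routing service traffic. The same check in Go, using a custom net.Resolver pinned to that server (address and query name taken from the log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// Ignore the system resolver address and dial the cluster DNS,
			// mirroring dig's @10.96.0.10 argument.
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(),
		"nginx-svc.default.svc.cluster.local.")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved:", addrs)
}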

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1003 20:10:36.592328    2003 config.go:182] Loaded profile config "functional-042000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-042000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-042000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-042000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nw48f" [2fe7dec4-d16d-4242-89ae-1dd984c27fa8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nw48f" [2fe7dec4-d16d-4242-89ae-1dd984c27fa8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003979009s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

TestFunctional/parallel/ServiceCmd/List (0.78s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.78s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.78s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 service list -o json
functional_test.go:1494: Took "782.874263ms" to run "out/minikube-darwin-amd64 -p functional-042000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.78s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:30436
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:30436
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "233.560858ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "84.802971ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "218.20184ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "84.706387ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/any-port (10.88s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3279223594/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728011463363102000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3279223594/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728011463363102000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3279223594/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728011463363102000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3279223594/001/test-1728011463363102000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (166.466397ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1003 20:11:03.530479    2003 retry.go:31] will retry after 265.428461ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 03:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 03:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 03:11 test-1728011463363102000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh cat /mount-9p/test-1728011463363102000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-042000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a8710c70-846b-4063-b9aa-7befae56216d] Pending
helpers_test.go:344: "busybox-mount" [a8710c70-846b-4063-b9aa-7befae56216d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a8710c70-846b-4063-b9aa-7befae56216d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a8710c70-846b-4063-b9aa-7befae56216d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005162604s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-042000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3279223594/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.88s)
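Note: the "will retry after 265.428461ms" line above is minikube's retry helper at work: the first findmnt probe races the 9p mount coming up, so the test polls rather than failing on the initial exit status 1. A generic sketch of that poll-with-backoff pattern (hypothetical helper, not minikube code; binary path and mount point taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls findmnt over SSH the way the test does, backing off
// between attempts instead of failing on the first non-zero exit.
func waitForMount(profile, mountPoint string, attempts int) error {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	return fmt.Errorf("%s never appeared as a 9p mount", mountPoint)
}

func main() {
	if err := waitForMount("functional-042000", "/mount-9p", 5); err != nil {
		panic(err)
	}
	fmt.Println("mount is up")
}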

TestFunctional/parallel/MountCmd/specific-port (1.53s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port867681939/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.470862ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1003 20:11:14.417690    2003 retry.go:31] will retry after 451.073794ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port867681939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh "sudo umount -f /mount-9p": exit status 1 (139.607022ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-042000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port867681939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T" /mount1: exit status 1 (174.925833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1003 20:11:15.957730    2003 retry.go:31] will retry after 427.2155ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-042000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-042000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-042000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4234344902/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)
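The cleanup path shown here relies on minikube's kill switch rather than unmounting each share individually; a minimal sketch, using the command from the log above:

	out/minikube-darwin-amd64 mount -p functional-042000 --kill=true   # terminate all mount daemons for the profile

After the kill, the helper's "unable to find parent, assuming dead" messages indicate the three mount processes (/mount1, /mount2, /mount3) were already gone, which is the expected outcome for this subtest.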

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-042000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-042000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-042000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-214000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/StopCluster (158.64s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 stop -v=7 --alsologtostderr
E1003 20:35:10.878006    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-darwin-amd64 -p ha-214000 stop -v=7 --alsologtostderr: (2m38.538778298s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr: exit status 7 (104.218799ms)

-- stdout --
	ha-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-214000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-214000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1003 20:35:48.201229    4946 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:48.201459    4946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.201464    4946 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:48.201468    4946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:48.201659    4946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:35:48.201848    4946 out.go:352] Setting JSON to false
	I1003 20:35:48.201871    4946 mustload.go:65] Loading cluster: ha-214000
	I1003 20:35:48.201909    4946 notify.go:220] Checking for updates...
	I1003 20:35:48.202217    4946 config.go:182] Loaded profile config "ha-214000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:48.202238    4946 status.go:174] checking status of ha-214000 ...
	I1003 20:35:48.202682    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.202736    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.214114    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51748
	I1003 20:35:48.214537    4946 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.214949    4946 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.214959    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.215226    4946 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.215368    4946 main.go:141] libmachine: (ha-214000) Calling .GetState
	I1003 20:35:48.215471    4946 main.go:141] libmachine: (ha-214000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.215536    4946 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid from json: 4822
	I1003 20:35:48.216555    4946 main.go:141] libmachine: (ha-214000) DBG | hyperkit pid 4822 missing from process table
	I1003 20:35:48.216576    4946 status.go:371] ha-214000 host status = "Stopped" (err=<nil>)
	I1003 20:35:48.216582    4946 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:48.216586    4946 status.go:176] ha-214000 status: &{Name:ha-214000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:35:48.216602    4946 status.go:174] checking status of ha-214000-m02 ...
	I1003 20:35:48.216861    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.216886    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.227645    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51750
	I1003 20:35:48.228003    4946 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.228398    4946 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.228425    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.228672    4946 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.228790    4946 main.go:141] libmachine: (ha-214000-m02) Calling .GetState
	I1003 20:35:48.228905    4946 main.go:141] libmachine: (ha-214000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.228968    4946 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid from json: 4274
	I1003 20:35:48.229966    4946 main.go:141] libmachine: (ha-214000-m02) DBG | hyperkit pid 4274 missing from process table
	I1003 20:35:48.229995    4946 status.go:371] ha-214000-m02 host status = "Stopped" (err=<nil>)
	I1003 20:35:48.230001    4946 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:48.230005    4946 status.go:176] ha-214000-m02 status: &{Name:ha-214000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:35:48.230025    4946 status.go:174] checking status of ha-214000-m03 ...
	I1003 20:35:48.230296    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:35:48.230333    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:35:48.241178    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51752
	I1003 20:35:48.241490    4946 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:35:48.241857    4946 main.go:141] libmachine: Using API Version  1
	I1003 20:35:48.241880    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:35:48.242122    4946 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:35:48.242244    4946 main.go:141] libmachine: (ha-214000-m03) Calling .GetState
	I1003 20:35:48.242346    4946 main.go:141] libmachine: (ha-214000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:35:48.242413    4946 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid from json: 4114
	I1003 20:35:48.243452    4946 main.go:141] libmachine: (ha-214000-m03) DBG | hyperkit pid 4114 missing from process table
	I1003 20:35:48.243475    4946 status.go:371] ha-214000-m03 host status = "Stopped" (err=<nil>)
	I1003 20:35:48.243481    4946 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:48.243485    4946 status.go:176] ha-214000-m03 status: &{Name:ha-214000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (158.64s)
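Note that the trailing "Non-zero exit ... exit status 7" is expected here: on this run, minikube status reports stopped hosts via a non-zero exit code while still printing per-node state, so a script checking a deliberately stopped cluster should inspect the output rather than the exit code alone. A minimal sketch of the same check:

	out/minikube-darwin-amd64 -p ha-214000 stop -v=7 --alsologtostderr
	out/minikube-darwin-amd64 -p ha-214000 status -v=7 --alsologtostderr || echo "status exited with $?"   # 7 when hosts are stopped, as above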

TestImageBuild/serial/Setup (37.81s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-540000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-540000 --driver=hyperkit : (37.813394804s)
--- PASS: TestImageBuild/serial/Setup (37.81s)

TestImageBuild/serial/NormalBuild (4.63s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-540000
E1003 20:41:05.072564    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-540000: (4.627013408s)
--- PASS: TestImageBuild/serial/NormalBuild (4.63s)

TestImageBuild/serial/BuildWithBuildArg (1.09s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-540000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-540000: (1.092047838s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.09s)

TestImageBuild/serial/BuildWithDockerIgnore (0.9s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-540000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.90s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.9s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-540000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.90s)
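As the command line above shows, image build takes the Dockerfile path via -f relative to the build context directory; a generic sketch of the same invocation, where the angle-bracket names are placeholders:

	out/minikube-darwin-amd64 image build -t <tag> -f <path/inside/context/Dockerfile> <context-dir> -p <profile>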

TestJSONOutput/start/Command (81.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-586000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-586000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m21.080573576s)
--- PASS: TestJSONOutput/start/Command (81.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-586000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-586000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-586000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-586000 --output=json --user=testUser: (8.340337592s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.61s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-217000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-217000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (371.258575ms)

-- stdout --
	{"specversion":"1.0","id":"3f162388-c7d7-4085-b1f5-b9d0a797768a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-217000] minikube v1.34.0 on Darwin 15.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8acb8b81-fa4d-48ce-8aae-76418aa7ded4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"6b31bb15-0e17-433a-bcb1-6cdfd65e7516","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig"}}
	{"specversion":"1.0","id":"2ba467aa-d646-4d8b-951b-ce164ec75252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"e28fd6ad-aa08-4b5f-b069-0b1c1eacd71a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b182623b-a97b-4728-b61b-2c38c5c02251","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube"}}
	{"specversion":"1.0","id":"12d13cfc-3c6a-4885-ae0b-3c21bea7bd83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cb4e22f6-fa39-44ff-bc2d-bb90954c5e42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-217000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-217000
--- PASS: TestErrorJSONOutput (0.61s)
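Each stdout line above is a single CloudEvents-style JSON object whose "type" field distinguishes steps, info messages, and the final error (here DRV_UNSUPPORTED_OS with exit code 56). Assuming jq is available on the host (not part of the test), the stream can be filtered like so; a sketch with placeholder arguments:

	out/minikube-darwin-amd64 start -p <profile> --output=json --driver=<driver> | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'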

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (90.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-730000 --driver=hyperkit 
E1003 20:43:01.992597    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-730000 --driver=hyperkit : (39.143479636s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-744000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-744000 --driver=hyperkit : (40.224040546s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-730000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-744000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-744000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-744000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-744000: (5.262959407s)
helpers_test.go:175: Cleaning up "first-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-730000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-730000: (5.26987307s)
--- PASS: TestMinikubeProfile (90.83s)
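The profile commands used above switch the active profile and then dump all profiles as JSON; a minimal sketch taken directly from the log:

	out/minikube-darwin-amd64 profile first-730000       # make first-730000 the active profile
	out/minikube-darwin-amd64 profile list -ojson        # machine-readable listing of all profiles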

TestMultiNode/serial/FreshStart2Nodes (118.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-276000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1003 20:48:01.992943    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:48:13.949143    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-276000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m58.369637996s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.63s)

TestMultiNode/serial/DeployApp2Nodes (8.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-276000 -- rollout status deployment/busybox: (7.190760852s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-4tmxp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-gctx2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-4tmxp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-gctx2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-4tmxp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-gctx2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-4tmxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-4tmxp -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-gctx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec busybox-7dff88458-gctx2 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
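The pipeline above extracts the host IP as resolved inside each pod: with busybox's nslookup output, line 5 carries the answer and the third space-separated field is the address (an assumption specific to that nslookup output format), which the test then pings (192.169.0.1 here). Run standalone, with <pod> as a placeholder:

	out/minikube-darwin-amd64 kubectl -p multinode-276000 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"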

TestMultiNode/serial/AddNode (55.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-276000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-276000 -v 3 --alsologtostderr: (55.254836441s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.59s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-276000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (5.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp testdata/cp-test.txt multinode-276000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1556108057/001/cp-test_multinode-276000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000:/home/docker/cp-test.txt multinode-276000-m02:/home/docker/cp-test_multinode-276000_multinode-276000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test_multinode-276000_multinode-276000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000:/home/docker/cp-test.txt multinode-276000-m03:/home/docker/cp-test_multinode-276000_multinode-276000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m03 "sudo cat /home/docker/cp-test_multinode-276000_multinode-276000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp testdata/cp-test.txt multinode-276000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1556108057/001/cp-test_multinode-276000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000-m02:/home/docker/cp-test.txt multinode-276000:/home/docker/cp-test_multinode-276000-m02_multinode-276000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000 "sudo cat /home/docker/cp-test_multinode-276000-m02_multinode-276000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000-m02:/home/docker/cp-test.txt multinode-276000-m03:/home/docker/cp-test_multinode-276000-m02_multinode-276000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m03 "sudo cat /home/docker/cp-test_multinode-276000-m02_multinode-276000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp testdata/cp-test.txt multinode-276000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1556108057/001/cp-test_multinode-276000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000-m03:/home/docker/cp-test.txt multinode-276000:/home/docker/cp-test_multinode-276000-m03_multinode-276000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000 "sudo cat /home/docker/cp-test_multinode-276000-m03_multinode-276000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000-m03:/home/docker/cp-test.txt multinode-276000-m02:/home/docker/cp-test_multinode-276000-m03_multinode-276000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test_multinode-276000-m03_multinode-276000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.68s)
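The cp subcommand exercised above supports three directions, each verified here with ssh -n; a minimal sketch from the log (host-to-node, node-to-host, node-to-node), with <local-path> as a placeholder:

	out/minikube-darwin-amd64 -p multinode-276000 cp testdata/cp-test.txt multinode-276000:/home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000:/home/docker/cp-test.txt <local-path>
	out/minikube-darwin-amd64 -p multinode-276000 cp multinode-276000:/home/docker/cp-test.txt multinode-276000-m02:/home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p multinode-276000 ssh -n multinode-276000-m02 "sudo cat /home/docker/cp-test.txt"   # confirm the copy landed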

TestMultiNode/serial/StopNode (2.9s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-276000 node stop m03: (2.352023793s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-276000 status: exit status 7 (274.265524ms)

-- stdout --
	multinode-276000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-276000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr: exit status 7 (276.531765ms)

-- stdout --
	multinode-276000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-276000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1003 20:49:52.841644    5761 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:49:52.841978    5761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:49:52.841983    5761 out.go:358] Setting ErrFile to fd 2...
	I1003 20:49:52.841987    5761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:49:52.842161    5761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:49:52.842338    5761 out.go:352] Setting JSON to false
	I1003 20:49:52.842359    5761 mustload.go:65] Loading cluster: multinode-276000
	I1003 20:49:52.842406    5761 notify.go:220] Checking for updates...
	I1003 20:49:52.843680    5761 config.go:182] Loaded profile config "multinode-276000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:49:52.843702    5761 status.go:174] checking status of multinode-276000 ...
	I1003 20:49:52.844113    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:52.844151    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:52.855552    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52783
	I1003 20:49:52.855908    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:52.856324    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:52.856342    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:52.856597    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:52.856716    5761 main.go:141] libmachine: (multinode-276000) Calling .GetState
	I1003 20:49:52.856802    5761 main.go:141] libmachine: (multinode-276000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:49:52.856871    5761 main.go:141] libmachine: (multinode-276000) DBG | hyperkit pid from json: 5453
	I1003 20:49:52.858139    5761 status.go:371] multinode-276000 host status = "Running" (err=<nil>)
	I1003 20:49:52.858158    5761 host.go:66] Checking if "multinode-276000" exists ...
	I1003 20:49:52.858447    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:52.858472    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:52.869508    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52785
	I1003 20:49:52.869814    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:52.870153    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:52.870165    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:52.870365    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:52.870471    5761 main.go:141] libmachine: (multinode-276000) Calling .GetIP
	I1003 20:49:52.870564    5761 host.go:66] Checking if "multinode-276000" exists ...
	I1003 20:49:52.870827    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:52.870846    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:52.881766    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52787
	I1003 20:49:52.882098    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:52.882441    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:52.882461    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:52.882663    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:52.882796    5761 main.go:141] libmachine: (multinode-276000) Calling .DriverName
	I1003 20:49:52.882969    5761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:49:52.882992    5761 main.go:141] libmachine: (multinode-276000) Calling .GetSSHHostname
	I1003 20:49:52.883075    5761 main.go:141] libmachine: (multinode-276000) Calling .GetSSHPort
	I1003 20:49:52.883194    5761 main.go:141] libmachine: (multinode-276000) Calling .GetSSHKeyPath
	I1003 20:49:52.883285    5761 main.go:141] libmachine: (multinode-276000) Calling .GetSSHUsername
	I1003 20:49:52.883360    5761 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/multinode-276000/id_rsa Username:docker}
	I1003 20:49:52.922444    5761 ssh_runner.go:195] Run: systemctl --version
	I1003 20:49:52.926625    5761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:49:52.937085    5761 kubeconfig.go:125] found "multinode-276000" server: "https://192.169.0.13:8443"
	I1003 20:49:52.937107    5761 api_server.go:166] Checking apiserver status ...
	I1003 20:49:52.937158    5761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:49:52.947595    5761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1928/cgroup
	W1003 20:49:52.954820    5761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1928/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:49:52.954880    5761 ssh_runner.go:195] Run: ls
	I1003 20:49:52.958070    5761 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1003 20:49:52.961399    5761 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1003 20:49:52.961411    5761 status.go:463] multinode-276000 apiserver status = Running (err=<nil>)
	I1003 20:49:52.961417    5761 status.go:176] multinode-276000 status: &{Name:multinode-276000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:49:52.961426    5761 status.go:174] checking status of multinode-276000-m02 ...
	I1003 20:49:52.961690    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:52.961710    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:52.972863    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52791
	I1003 20:49:52.973232    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:52.973614    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:52.973637    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:52.973884    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:52.974193    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .GetState
	I1003 20:49:52.974287    5761 main.go:141] libmachine: (multinode-276000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:49:52.974358    5761 main.go:141] libmachine: (multinode-276000-m02) DBG | hyperkit pid from json: 5474
	I1003 20:49:52.975672    5761 status.go:371] multinode-276000-m02 host status = "Running" (err=<nil>)
	I1003 20:49:52.975681    5761 host.go:66] Checking if "multinode-276000-m02" exists ...
	I1003 20:49:52.975947    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:52.975969    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:52.986879    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52793
	I1003 20:49:52.987202    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:52.987553    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:52.987565    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:52.987785    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:52.987900    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .GetIP
	I1003 20:49:52.987985    5761 host.go:66] Checking if "multinode-276000-m02" exists ...
	I1003 20:49:52.988242    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:52.988264    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:52.999235    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52795
	I1003 20:49:52.999580    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:52.999930    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:52.999939    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:53.000143    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:53.000254    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .DriverName
	I1003 20:49:53.000398    5761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 20:49:53.000409    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .GetSSHHostname
	I1003 20:49:53.000511    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .GetSSHPort
	I1003 20:49:53.000597    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .GetSSHKeyPath
	I1003 20:49:53.000674    5761 main.go:141] libmachine: (multinode-276000-m02) Calling .GetSSHUsername
	I1003 20:49:53.000750    5761 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1440/.minikube/machines/multinode-276000-m02/id_rsa Username:docker}
	I1003 20:49:53.031786    5761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:49:53.042764    5761 status.go:176] multinode-276000-m02 status: &{Name:multinode-276000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:49:53.042779    5761 status.go:174] checking status of multinode-276000-m03 ...
	I1003 20:49:53.043061    5761 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:49:53.043085    5761 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:49:53.054307    5761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52798
	I1003 20:49:53.054657    5761 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:49:53.055001    5761 main.go:141] libmachine: Using API Version  1
	I1003 20:49:53.055015    5761 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:49:53.055239    5761 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:49:53.055340    5761 main.go:141] libmachine: (multinode-276000-m03) Calling .GetState
	I1003 20:49:53.055421    5761 main.go:141] libmachine: (multinode-276000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:49:53.055496    5761 main.go:141] libmachine: (multinode-276000-m03) DBG | hyperkit pid from json: 5545
	I1003 20:49:53.056762    5761 main.go:141] libmachine: (multinode-276000-m03) DBG | hyperkit pid 5545 missing from process table
	I1003 20:49:53.056793    5761 status.go:371] multinode-276000-m03 host status = "Stopped" (err=<nil>)
	I1003 20:49:53.056799    5761 status.go:384] host is not running, skipping remaining checks
	I1003 20:49:53.056803    5761 status.go:176] multinode-276000-m03 status: &{Name:multinode-276000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.90s)

TestMultiNode/serial/StartAfterStop (41.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 node start m03 -v=7 --alsologtostderr
E1003 20:50:10.875891    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-276000 node start m03 -v=7 --alsologtostderr: (41.203652866s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.60s)

TestMultiNode/serial/RestartKeepsNodes (179.7s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-276000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-276000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-276000: (18.816587323s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-276000 --wait=true -v=8 --alsologtostderr
E1003 20:53:01.991541    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-276000 --wait=true -v=8 --alsologtostderr: (2m40.760938451s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-276000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (179.70s)
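The invariant checked here is that a full stop/start cycle preserves the node list; a minimal sketch of the same round trip, using the commands from the log:

	out/minikube-darwin-amd64 node list -p multinode-276000
	out/minikube-darwin-amd64 stop -p multinode-276000
	out/minikube-darwin-amd64 start -p multinode-276000 --wait=true -v=8 --alsologtostderr
	out/minikube-darwin-amd64 node list -p multinode-276000   # should match the pre-stop listing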

TestMultiNode/serial/DeleteNode (3.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-276000 node delete m03: (2.951723294s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.34s)
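
Note on the readiness check above: the go-template walks every node's status.conditions and prints the status of each Ready condition, one line per node. It can be replayed by hand against the same kubeconfig context (context name matches the profile):

$ kubectl --context multinode-276000 get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"

After deleting m03, every remaining node is expected to print True.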

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-276000 stop: (16.619112857s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-276000 status: exit status 7 (90.112975ms)

-- stdout --
	multinode-276000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-276000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr: exit status 7 (89.988324ms)

-- stdout --
	multinode-276000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-276000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1003 20:53:54.478534    5916 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:53:54.478764    5916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:54.478770    5916 out.go:358] Setting ErrFile to fd 2...
	I1003 20:53:54.478774    5916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:54.478955    5916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1440/.minikube/bin
	I1003 20:53:54.479133    5916 out.go:352] Setting JSON to false
	I1003 20:53:54.479158    5916 mustload.go:65] Loading cluster: multinode-276000
	I1003 20:53:54.479194    5916 notify.go:220] Checking for updates...
	I1003 20:53:54.479522    5916 config.go:182] Loaded profile config "multinode-276000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:53:54.479540    5916 status.go:174] checking status of multinode-276000 ...
	I1003 20:53:54.479974    5916 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:53:54.480013    5916 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:53:54.491346    5916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53028
	I1003 20:53:54.491765    5916 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:53:54.492179    5916 main.go:141] libmachine: Using API Version  1
	I1003 20:53:54.492197    5916 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:53:54.492450    5916 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:53:54.492583    5916 main.go:141] libmachine: (multinode-276000) Calling .GetState
	I1003 20:53:54.492676    5916 main.go:141] libmachine: (multinode-276000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:53:54.492742    5916 main.go:141] libmachine: (multinode-276000) DBG | hyperkit pid from json: 5823
	I1003 20:53:54.493754    5916 main.go:141] libmachine: (multinode-276000) DBG | hyperkit pid 5823 missing from process table
	I1003 20:53:54.493809    5916 status.go:371] multinode-276000 host status = "Stopped" (err=<nil>)
	I1003 20:53:54.493821    5916 status.go:384] host is not running, skipping remaining checks
	I1003 20:53:54.493825    5916 status.go:176] multinode-276000 status: &{Name:multinode-276000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:53:54.493845    5916 status.go:174] checking status of multinode-276000-m02 ...
	I1003 20:53:54.494128    5916 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1003 20:53:54.494159    5916 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1003 20:53:54.504920    5916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53030
	I1003 20:53:54.505267    5916 main.go:141] libmachine: () Calling .GetVersion
	I1003 20:53:54.505634    5916 main.go:141] libmachine: Using API Version  1
	I1003 20:53:54.505658    5916 main.go:141] libmachine: () Calling .SetConfigRaw
	I1003 20:53:54.505876    5916 main.go:141] libmachine: () Calling .GetMachineName
	I1003 20:53:54.506004    5916 main.go:141] libmachine: (multinode-276000-m02) Calling .GetState
	I1003 20:53:54.506102    5916 main.go:141] libmachine: (multinode-276000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1003 20:53:54.506165    5916 main.go:141] libmachine: (multinode-276000-m02) DBG | hyperkit pid from json: 5837
	I1003 20:53:54.507186    5916 main.go:141] libmachine: (multinode-276000-m02) DBG | hyperkit pid 5837 missing from process table
	I1003 20:53:54.507239    5916 status.go:371] multinode-276000-m02 host status = "Stopped" (err=<nil>)
	I1003 20:53:54.507248    5916 status.go:384] host is not running, skipping remaining checks
	I1003 20:53:54.507252    5916 status.go:176] multinode-276000-m02 status: &{Name:multinode-276000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)

TestMultiNode/serial/RestartMultiNode (122.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-276000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1003 20:55:10.877168    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-276000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m2.206539198s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-276000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (122.58s)

TestMultiNode/serial/ValidateNameConflict (41.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-276000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-276000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-276000-m02 --driver=hyperkit : exit status 14 (641.764668ms)

-- stdout --
	* [multinode-276000-m02] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-276000-m02' is duplicated with machine name 'multinode-276000-m02' in profile 'multinode-276000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-276000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-276000-m03 --driver=hyperkit : (36.70308545s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-276000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-276000: exit status 80 (279.653262ms)

-- stdout --
	* Adding node m03 to cluster multinode-276000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-276000-m03 already exists in multinode-276000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-276000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-276000-m03: (3.401847199s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.09s)
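
Both non-zero exits above are the intended guardrails rather than failures: exit status 14 (MK_USAGE) rejects a new profile whose name collides with a machine name inside an existing profile, and exit status 80 (GUEST_NODE_ADD) rejects adding a node whose name is already taken. A conflict-free run only needs a name not already in use (profile name below is illustrative):

$ out/minikube-darwin-amd64 profile list
$ out/minikube-darwin-amd64 start -p multinode-demo --driver=hyperkit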

TestPreload (190.53s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1003 20:57:45.113239    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:58:02.030073    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m48.60912642s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-389000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-389000 image pull gcr.io/k8s-minikube/busybox: (6.161459284s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-389000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-389000: (8.401254685s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-389000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-389000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m1.926982846s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-389000 image list
helpers_test.go:175: Cleaning up "test-preload-389000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-389000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-389000: (5.258172973s)
--- PASS: TestPreload (190.53s)
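
The pass above exercises the preload fallback path: the cluster is created on v1.24.4 with --preload=false, busybox is pulled into the VM's container runtime, the VM is stopped, and after the restart the image list is still expected to contain busybox. While the profile exists, the same spot check can be made by hand:

$ out/minikube-darwin-amd64 -p test-preload-389000 image list | grep busybox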

TestSkaffold (126.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3469484746 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3469484746 version: (1.7040451s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-665000 --memory=2600 --driver=hyperkit 
E1003 21:03:02.030328    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-665000 --memory=2600 --driver=hyperkit : (38.41044939s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3469484746 run --minikube-profile skaffold-665000 --kube-context skaffold-665000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3469484746 run --minikube-profile skaffold-665000 --kube-context skaffold-665000 --status-check=true --port-forward=false --interactive=false: (1m4.408406403s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7bcf645c47-6zk2z" [7280a7c8-09d0-4b63-ac20-761455c17979] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005205779s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6d6b54d6c5-hpm5m" [372cb401-69b4-41d2-bbd5-94be3d86e055] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003960177s
helpers_test.go:175: Cleaning up "skaffold-665000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-665000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-665000: (5.258498239s)
--- PASS: TestSkaffold (126.19s)

TestRunningBinaryUpgrade (120.74s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3881744836 start -p running-upgrade-495000 --memory=2200 --vm-driver=hyperkit 
E1003 21:18:02.064401    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3881744836 start -p running-upgrade-495000 --memory=2200 --vm-driver=hyperkit : (1m15.131155417s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-495000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-495000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (35.271317158s)
helpers_test.go:175: Cleaning up "running-upgrade-495000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-495000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-495000: (5.245112882s)
--- PASS: TestRunningBinaryUpgrade (120.74s)

TestKubernetesUpgrade (1398.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E1003 21:19:11.020878    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:20:10.950343    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:21:34.026811    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:23:02.064652    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:24:11.020123    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:25:10.949270    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:25:34.093934    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:28:02.064270    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:29:11.020902    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:30:10.948504    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (11m45.534877382s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-848000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-848000: (8.397425943s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-848000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-848000 status --format={{.Host}}: exit status 7 (76.664707ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E1003 21:31:05.154975    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:33:02.098201    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:34:11.055655    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:35:10.985358    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:38:02.101277    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:38:14.065818    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:39:11.055716    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
E1003 21:40:10.986159    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/functional-042000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (10m54.10702928s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-848000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (419.030792ms)

-- stdout --
	* [kubernetes-upgrade-848000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-848000
	    minikube start -p kubernetes-upgrade-848000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8480002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-848000 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E1003 21:42:14.132931    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/skaffold-665000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (24.80058685s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-848000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-848000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-848000: (5.276912012s)
--- PASS: TestKubernetesUpgrade (1398.70s)
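
The exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) mid-test is the behavior under test: minikube upgrades an existing cluster in place but refuses to downgrade one. The supported in-place upgrade is the stop/start pair exercised above:

$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-848000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-848000 --kubernetes-version=v1.31.1 --driver=hyperkit

Going backwards requires deleting and recreating the profile, exactly as the suggestion block in the stderr lays out.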

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19546
- KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1700988039/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1700988039/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1700988039/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1700988039/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.10s)
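
The chown/chmod pair printed above is how the hyperkit driver normally acquires its privileges: the binary must be owned by root:wheel with the setuid bit set so VM creation can run privileged without running all of minikube as root. In this non-interactive run sudo cannot prompt for a password, so the update is skipped with a warning, which is the expected outcome this test verifies. A correctly installed driver would look like this (path and output illustrative):

$ ls -l /Users/jenkins/.minikube/bin/docker-machine-driver-hyperkit
-rwsr-xr-x  1 root  wheel  ...  docker-machine-driver-hyperkit

The s in the owner-execute slot is the setuid bit applied by chmod u+s.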

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.93s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19546
- KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2719433753/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2719433753/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2719433753/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2719433753/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.93s)

TestStoppedBinaryUpgrade/Setup (5.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.04s)

TestStoppedBinaryUpgrade/Upgrade (151.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3027496368 start -p stopped-upgrade-856000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3027496368 start -p stopped-upgrade-856000 --memory=2200 --vm-driver=hyperkit : (48.671207206s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3027496368 -p stopped-upgrade-856000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3027496368 -p stopped-upgrade-856000 stop: (8.25631115s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-856000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1003 21:43:02.103253    2003 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1440/.minikube/profiles/addons-675000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-856000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m34.288702484s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (151.22s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.56s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-856000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-856000: (2.556612122s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.56s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (478.157985ms)

-- stdout --
	* [NoKubernetes-803000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1440/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1440/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
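
Exit status 14 (MK_USAGE) confirms the two flags are mutually exclusive: there is no Kubernetes version to pin when --no-kubernetes says not to deploy Kubernetes at all. Dropping either flag makes the command valid, which is what the next subtest does:

$ out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --driver=hyperkit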

TestNoKubernetes/serial/StartWithK8s (70.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-803000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-803000 --driver=hyperkit : (1m10.134727216s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-803000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.32s)

TestNoKubernetes/serial/StartWithStopK8s (18.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --driver=hyperkit : (16.010458554s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-803000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-803000 status -o json: exit status 2 (165.028422ms)

-- stdout --
	{"Name":"NoKubernetes-803000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-803000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-803000: (2.459433899s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.64s)

TestNoKubernetes/serial/Start (19.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-803000 --no-kubernetes --driver=hyperkit : (19.713109396s)
--- PASS: TestNoKubernetes/serial/Start (19.71s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (144.434528ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)
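
The non-zero ssh exit is the assertion passing: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, and --quiet suppresses the state name, so the stopped kubelet surfaces through ssh as "Process exited with status 3". Without --quiet the state is printed instead (output illustrative):

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-803000 "sudo systemctl is-active kubelet"
inactive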

TestNoKubernetes/serial/ProfileList (0.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.63s)

TestNoKubernetes/serial/Stop (2.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-803000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-803000: (2.428041781s)
--- PASS: TestNoKubernetes/serial/Stop (2.43s)

TestNoKubernetes/serial/StartNoArgs (20.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-803000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-803000 --driver=hyperkit : (20.101730078s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (146.513436ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

Test skip (18/220)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:423: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)